00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1827 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3088 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.095 The recommended git tool is: git 00:00:00.095 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.117 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.149 Using shallow fetch with depth 1 00:00:00.149 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.149 > git --version # timeout=10 00:00:00.198 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.446 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.458 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.469 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:06.470 > git config core.sparsecheckout # timeout=10 00:00:06.481 > git read-tree -mu HEAD # timeout=10 00:00:06.499 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:06.520 Commit message: "inventory/dev: add missing long names" 00:00:06.520 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:06.623 [Pipeline] Start of Pipeline 00:00:06.633 [Pipeline] library 00:00:06.634 Loading library shm_lib@master 00:00:06.634 Library shm_lib@master is cached. Copying from home. 00:00:06.650 [Pipeline] node 00:00:21.652 Still waiting to schedule task 00:00:21.652 Waiting for next available executor on ‘vagrant-vm-host’ 00:01:19.962 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest 00:01:19.964 [Pipeline] { 00:01:19.975 [Pipeline] catchError 00:01:19.977 [Pipeline] { 00:01:19.992 [Pipeline] wrap 00:01:20.002 [Pipeline] { 00:01:20.010 [Pipeline] stage 00:01:20.012 [Pipeline] { (Prologue) 00:01:20.033 [Pipeline] echo 00:01:20.034 Node: VM-host-SM16 00:01:20.040 [Pipeline] cleanWs 00:01:20.048 [WS-CLEANUP] Deleting project workspace... 00:01:20.048 [WS-CLEANUP] Deferred wipeout is used... 
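The checkout above is a plain shallow fetch pinned to a single commit; a minimal way to replay it outside Jenkins (URL and commit taken from the log, target directory arbitrary, and the commit is only reachable while it is still the fetched tip of master):

  git init jbp && cd jbp
  git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6   # FETCH_HEAD in the run above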
00:01:20.054 [WS-CLEANUP] done 00:01:20.226 [Pipeline] setCustomBuildProperty 00:01:20.290 [Pipeline] nodesByLabel 00:01:20.291 Found a total of 1 nodes with the 'sorcerer' label 00:01:20.301 [Pipeline] httpRequest 00:01:20.305 HttpMethod: GET 00:01:20.305 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:01:20.306 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:01:20.307 Response Code: HTTP/1.1 200 OK 00:01:20.308 Success: Status code 200 is in the accepted range: 200,404 00:01:20.309 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:01:20.870 [Pipeline] sh 00:01:21.151 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:01:21.170 [Pipeline] httpRequest 00:01:21.174 HttpMethod: GET 00:01:21.175 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:21.176 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:21.177 Response Code: HTTP/1.1 200 OK 00:01:21.178 Success: Status code 200 is in the accepted range: 200,404 00:01:21.178 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:32.216 [Pipeline] sh 00:01:32.492 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:35.777 [Pipeline] sh 00:01:36.055 + git -C spdk log --oneline -n5 00:01:36.055 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:01:36.055 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:01:36.055 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:01:36.055 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:01:36.055 3b33f4333 test/nvme/cuse: Fix typo 00:01:36.073 [Pipeline] writeFile 00:01:36.089 [Pipeline] sh 00:01:36.367 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:36.377 [Pipeline] sh 00:01:36.720 + cat autorun-spdk.conf 00:01:36.721 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.721 SPDK_TEST_NVME=1 00:01:36.721 SPDK_TEST_FTL=1 00:01:36.721 SPDK_TEST_ISAL=1 00:01:36.721 SPDK_RUN_ASAN=1 00:01:36.721 SPDK_RUN_UBSAN=1 00:01:36.721 SPDK_TEST_XNVME=1 00:01:36.721 SPDK_TEST_NVME_FDP=1 00:01:36.721 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.726 RUN_NIGHTLY=1 00:01:36.728 [Pipeline] } 00:01:36.743 [Pipeline] // stage 00:01:36.761 [Pipeline] stage 00:01:36.762 [Pipeline] { (Run VM) 00:01:36.775 [Pipeline] sh 00:01:37.050 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.050 + echo 'Start stage prepare_nvme.sh' 00:01:37.050 Start stage prepare_nvme.sh 00:01:37.050 + [[ -n 1 ]] 00:01:37.050 + disk_prefix=ex1 00:01:37.050 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:37.050 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:37.050 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:37.050 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.050 ++ SPDK_TEST_NVME=1 00:01:37.050 ++ SPDK_TEST_FTL=1 00:01:37.050 ++ SPDK_TEST_ISAL=1 00:01:37.050 ++ SPDK_RUN_ASAN=1 00:01:37.050 ++ SPDK_RUN_UBSAN=1 00:01:37.050 ++ SPDK_TEST_XNVME=1 00:01:37.050 ++ SPDK_TEST_NVME_FDP=1 00:01:37.050 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.050 ++ RUN_NIGHTLY=1 00:01:37.050 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:37.050 + nvme_files=() 
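prepare_nvme.sh drives image creation from an associative array, declared just below, that maps backing-file names to sizes. The "Formatting ... fmt=raw ... preallocation=falloc" lines further down match qemu-img's output, so the loop reduces to roughly this sketch (create_nvme_img.sh itself is not shown in this log; the qemu-img call is an assumption based on that output):

  declare -A nvme_files=( ['nvme.img']=5G ['nvme-ftl.img']=6G )   # subset of the map below
  backend_dir=/var/lib/libvirt/images/backends
  for nvme in "${!nvme_files[@]}"; do    # "${!arr[@]}" expands to the keys, i.e. the image names
      qemu-img create -f raw -o preallocation=falloc \
          "$backend_dir/ex1-$nvme" "${nvme_files[$nvme]}"
  done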
00:01:37.050 + declare -A nvme_files 00:01:37.050 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.050 + nvme_files['nvme.img']=5G 00:01:37.050 + nvme_files['nvme-cmb.img']=5G 00:01:37.051 + nvme_files['nvme-multi0.img']=4G 00:01:37.051 + nvme_files['nvme-multi1.img']=4G 00:01:37.051 + nvme_files['nvme-multi2.img']=4G 00:01:37.051 + nvme_files['nvme-openstack.img']=8G 00:01:37.051 + nvme_files['nvme-zns.img']=5G 00:01:37.051 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.051 + (( SPDK_TEST_FTL == 1 )) 00:01:37.051 + nvme_files["nvme-ftl.img"]=6G 00:01:37.051 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.051 + nvme_files["nvme-fdp.img"]=1G 00:01:37.051 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:37.051 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:37.051 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:37.051 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:37.051 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:37.051 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:37.051 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:37.051 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.051 + for nvme in "${!nvme_files[@]}" 00:01:37.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:37.308 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:37.308 + for nvme in "${!nvme_files[@]}" 00:01:37.308 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:37.308 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.308 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:37.308 + echo 'End stage prepare_nvme.sh' 00:01:37.308 End stage prepare_nvme.sh 00:01:37.319 [Pipeline] sh 00:01:37.597 + DISTRO=fedora38 
CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.597 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:37.597 00:01:37.597 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:37.597 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:37.597 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:37.597 HELP=0 00:01:37.597 DRY_RUN=0 00:01:37.597 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:37.597 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:37.597 NVME_AUTO_CREATE=0 00:01:37.597 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:37.597 NVME_CMB=,,,, 00:01:37.597 NVME_PMR=,,,, 00:01:37.597 NVME_ZNS=,,,, 00:01:37.597 NVME_MS=true,,,, 00:01:37.597 NVME_FDP=,,,on, 00:01:37.597 SPDK_VAGRANT_DISTRO=fedora38 00:01:37.597 SPDK_VAGRANT_VMCPU=10 00:01:37.597 SPDK_VAGRANT_VMRAM=12288 00:01:37.597 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.597 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:37.597 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.597 SPDK_OPENSTACK_NETWORK=0 00:01:37.597 VAGRANT_PACKAGE_BOX=0 00:01:37.597 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.597 FORCE_DISTRO=true 00:01:37.597 VAGRANT_BOX_VERSION= 00:01:37.597 EXTRA_VAGRANTFILES= 00:01:37.597 NIC_MODEL=e1000 00:01:37.597 00:01:37.597 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:37.597 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:40.876 Bringing machine 'default' up with 'libvirt' provider... 00:01:41.809 ==> default: Creating image (snapshot of base box volume). 00:01:41.809 ==> default: Creating domain with the following settings... 
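vagrant_create_vm.sh turns each -b backing file from the Setup line above into the -device/-drive pairs dumped in the domain settings below. Pulled out of that dump and written as a stand-alone command line, the fourth, FDP-enabled controller looks roughly like this (arguments copied verbatim from the dump; the machine, memory, and display flags QEMU also needs are omitted here):

  qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096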
00:01:41.809 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715775770_86817483ac9349747af5
00:01:41.809 ==> default: -- Domain type: kvm
00:01:41.809 ==> default: -- Cpus: 10
00:01:41.809 ==> default: -- Feature: acpi
00:01:41.809 ==> default: -- Feature: apic
00:01:41.809 ==> default: -- Feature: pae
00:01:41.809 ==> default: -- Memory: 12288M
00:01:41.809 ==> default: -- Memory Backing: hugepages:
00:01:41.809 ==> default: -- Management MAC:
00:01:41.809 ==> default: -- Loader:
00:01:41.809 ==> default: -- Nvram:
00:01:41.809 ==> default: -- Base box: spdk/fedora38
00:01:41.809 ==> default: -- Storage pool: default
00:01:41.809 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715775770_86817483ac9349747af5.img (20G)
00:01:41.809 ==> default: -- Volume Cache: default
00:01:41.809 ==> default: -- Kernel:
00:01:41.809 ==> default: -- Initrd:
00:01:41.809 ==> default: -- Graphics Type: vnc
00:01:41.809 ==> default: -- Graphics Port: -1
00:01:41.809 ==> default: -- Graphics IP: 127.0.0.1
00:01:41.809 ==> default: -- Graphics Password: Not defined
00:01:41.809 ==> default: -- Video Type: cirrus
00:01:41.809 ==> default: -- Video VRAM: 9216
00:01:41.809 ==> default: -- Sound Type:
00:01:41.809 ==> default: -- Keymap: en-us
00:01:41.809 ==> default: -- TPM Path:
00:01:41.809 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:41.809 ==> default: -- Command line args:
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:41.809 ==> default: -> value=-drive,
00:01:41.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:41.809 ==> default: -> value=-drive,
00:01:41.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:41.809 ==> default: -> value=-drive,
00:01:41.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:41.809 ==> default: -> value=-drive,
00:01:41.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:41.809 ==> default: -> value=-drive,
00:01:41.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:41.809 ==> default: -> value=-device,
00:01:41.809 ==> default: ->
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.809 ==> default: -> value=-device, 00:01:41.809 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:41.809 ==> default: -> value=-device, 00:01:41.809 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:41.809 ==> default: -> value=-drive, 00:01:41.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:41.809 ==> default: -> value=-device, 00:01:41.809 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.067 ==> default: Creating shared folders metadata... 00:01:42.067 ==> default: Starting domain. 00:01:44.019 ==> default: Waiting for domain to get an IP address... 00:02:02.093 ==> default: Waiting for SSH to become available... 00:02:03.036 ==> default: Configuring and enabling network interfaces... 00:02:08.295 default: SSH address: 192.168.121.89:22 00:02:08.295 default: SSH username: vagrant 00:02:08.295 default: SSH auth method: private key 00:02:10.195 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:18.347 ==> default: Mounting SSHFS shared folder... 00:02:18.914 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:18.914 ==> default: Checking Mount.. 00:02:19.848 ==> default: Folder Successfully Mounted! 00:02:19.848 ==> default: Running provisioner: file... 00:02:20.782 default: ~/.gitconfig => .gitconfig 00:02:21.039 00:02:21.039 SUCCESS! 00:02:21.039 00:02:21.039 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:21.039 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:21.039 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:21.039 00:02:21.048 [Pipeline] } 00:02:21.068 [Pipeline] // stage 00:02:21.076 [Pipeline] dir 00:02:21.077 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:02:21.079 [Pipeline] { 00:02:21.092 [Pipeline] catchError 00:02:21.093 [Pipeline] { 00:02:21.106 [Pipeline] sh 00:02:21.384 + vagrant ssh-config --host vagrant+ 00:02:21.384 sed -ne /^Host/,$p 00:02:21.384 + tee ssh_conf 00:02:25.567 Host vagrant 00:02:25.567 HostName 192.168.121.89 00:02:25.567 User vagrant 00:02:25.567 Port 22 00:02:25.567 UserKnownHostsFile /dev/null 00:02:25.567 StrictHostKeyChecking no 00:02:25.567 PasswordAuthentication no 00:02:25.567 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:02:25.567 IdentitiesOnly yes 00:02:25.567 LogLevel FATAL 00:02:25.567 ForwardAgent yes 00:02:25.567 ForwardX11 yes 00:02:25.567 00:02:25.581 [Pipeline] withEnv 00:02:25.584 [Pipeline] { 00:02:25.602 [Pipeline] sh 00:02:25.879 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:25.879 source /etc/os-release 00:02:25.879 [[ -e /image.version ]] && img=$(< /image.version) 00:02:25.879 # Minimal, systemd-like check. 
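# Two idioms carry the container check that follows: /.dockerenv exists only
# inside a Docker container, and "${VAR#*_}" strips the shortest leading match
# of "*_". A stand-alone demo of that expansion (hypothetical hostname value):
#   name=agt-er_autotest_547-896
#   echo "${name#*_}"    # -> autotest_547-896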
00:02:25.879 if [[ -e /.dockerenv ]]; then 00:02:25.879 # Clear garbage from the node's name: 00:02:25.879 # agt-er_autotest_547-896 -> autotest_547-896 00:02:25.879 # $HOSTNAME is the actual container id 00:02:25.879 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:25.879 if mountpoint -q /etc/hostname; then 00:02:25.879 # We can assume this is a mount from a host where container is running, 00:02:25.879 # so fetch its hostname to easily identify the target swarm worker. 00:02:25.879 container="$(< /etc/hostname) ($agent)" 00:02:25.879 else 00:02:25.879 # Fallback 00:02:25.879 container=$agent 00:02:25.879 fi 00:02:25.879 fi 00:02:25.879 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:25.879 00:02:26.148 [Pipeline] } 00:02:26.166 [Pipeline] // withEnv 00:02:26.175 [Pipeline] setCustomBuildProperty 00:02:26.187 [Pipeline] stage 00:02:26.189 [Pipeline] { (Tests) 00:02:26.206 [Pipeline] sh 00:02:26.483 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:26.496 [Pipeline] timeout 00:02:26.496 Timeout set to expire in 40 min 00:02:26.498 [Pipeline] { 00:02:26.512 [Pipeline] sh 00:02:26.791 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:27.358 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:02:27.369 [Pipeline] sh 00:02:27.645 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:27.915 [Pipeline] sh 00:02:28.187 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:28.468 [Pipeline] sh 00:02:28.818 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:02:28.818 ++ readlink -f spdk_repo 00:02:28.818 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:28.818 + [[ -n /home/vagrant/spdk_repo ]] 00:02:28.818 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:28.818 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:28.818 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:28.818 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:28.818 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:28.818 + cd /home/vagrant/spdk_repo 00:02:28.818 + source /etc/os-release 00:02:28.818 ++ NAME='Fedora Linux' 00:02:28.818 ++ VERSION='38 (Cloud Edition)' 00:02:28.818 ++ ID=fedora 00:02:28.818 ++ VERSION_ID=38 00:02:28.818 ++ VERSION_CODENAME= 00:02:28.818 ++ PLATFORM_ID=platform:f38 00:02:28.818 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:28.818 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:28.818 ++ LOGO=fedora-logo-icon 00:02:28.818 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:28.818 ++ HOME_URL=https://fedoraproject.org/ 00:02:28.818 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:28.818 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:28.818 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:28.818 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:28.818 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:28.818 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:28.819 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:28.819 ++ SUPPORT_END=2024-05-14 00:02:28.819 ++ VARIANT='Cloud Edition' 00:02:28.819 ++ VARIANT_ID=cloud 00:02:28.819 + uname -a 00:02:28.819 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:28.819 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:29.076 Hugepages 00:02:29.076 node hugesize free / total 00:02:29.076 node0 1048576kB 0 / 0 00:02:29.076 node0 2048kB 0 / 0 00:02:29.076 00:02:29.076 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:29.076 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:29.076 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:29.334 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:29.334 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:29.334 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3c3n1 00:02:29.334 + rm -f /tmp/spdk-ld-path 00:02:29.334 + source autorun-spdk.conf 00:02:29.334 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.334 ++ SPDK_TEST_NVME=1 00:02:29.334 ++ SPDK_TEST_FTL=1 00:02:29.334 ++ SPDK_TEST_ISAL=1 00:02:29.334 ++ SPDK_RUN_ASAN=1 00:02:29.334 ++ SPDK_RUN_UBSAN=1 00:02:29.334 ++ SPDK_TEST_XNVME=1 00:02:29.334 ++ SPDK_TEST_NVME_FDP=1 00:02:29.334 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.334 ++ RUN_NIGHTLY=1 00:02:29.334 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:29.334 + [[ -n '' ]] 00:02:29.334 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:29.334 + for M in /var/spdk/build-*-manifest.txt 00:02:29.334 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:29.334 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.334 + for M in /var/spdk/build-*-manifest.txt 00:02:29.334 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:29.334 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.334 ++ uname 00:02:29.334 + [[ Linux == \L\i\n\u\x ]] 00:02:29.334 + sudo dmesg -T 00:02:29.334 + sudo dmesg --clear 00:02:29.334 + dmesg_pid=5281 00:02:29.334 + sudo dmesg -Tw 00:02:29.334 + [[ Fedora Linux == FreeBSD ]] 00:02:29.334 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.334 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.334 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:29.334 + [[ -x /usr/src/fio-static/fio ]] 00:02:29.334 + export FIO_BIN=/usr/src/fio-static/fio 00:02:29.334 + 
FIO_BIN=/usr/src/fio-static/fio 00:02:29.334 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:29.334 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:29.334 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:29.334 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.334 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.334 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:29.334 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.334 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.334 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.334 Test configuration: 00:02:29.334 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.334 SPDK_TEST_NVME=1 00:02:29.334 SPDK_TEST_FTL=1 00:02:29.334 SPDK_TEST_ISAL=1 00:02:29.334 SPDK_RUN_ASAN=1 00:02:29.334 SPDK_RUN_UBSAN=1 00:02:29.334 SPDK_TEST_XNVME=1 00:02:29.334 SPDK_TEST_NVME_FDP=1 00:02:29.334 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.592 RUN_NIGHTLY=1 12:23:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:29.592 12:23:38 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:29.592 12:23:38 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.592 12:23:38 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.592 12:23:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.592 12:23:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.592 12:23:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.592 12:23:38 -- paths/export.sh@5 -- $ export PATH 00:02:29.592 12:23:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.592 12:23:38 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:29.592 12:23:38 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:29.592 12:23:38 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715775818.XXXXXX 00:02:29.592 12:23:38 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715775818.xSCOlc 00:02:29.592 12:23:38 -- 
common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:29.592 12:23:38 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:02:29.592 12:23:38 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:29.592 12:23:38 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:29.592 12:23:38 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:29.592 12:23:38 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:29.592 12:23:38 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:29.592 12:23:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.592 12:23:38 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:29.592 12:23:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:29.592 12:23:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:29.592 12:23:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:29.592 12:23:38 -- spdk/autobuild.sh@16 -- $ date -u 00:02:29.592 Wed May 15 12:23:38 PM UTC 2024 00:02:29.592 12:23:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:29.592 LTS-24-g36faa8c31 00:02:29.592 12:23:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:29.592 12:23:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:29.592 12:23:38 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:29.592 12:23:38 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:29.592 12:23:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.592 ************************************ 00:02:29.592 START TEST asan 00:02:29.592 ************************************ 00:02:29.592 using asan 00:02:29.592 12:23:38 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:29.592 00:02:29.592 real 0m0.001s 00:02:29.592 user 0m0.000s 00:02:29.592 sys 0m0.000s 00:02:29.592 12:23:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.592 12:23:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.592 ************************************ 00:02:29.592 END TEST asan 00:02:29.592 ************************************ 00:02:29.592 12:23:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:29.592 12:23:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:29.592 12:23:38 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:29.592 12:23:38 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:29.592 12:23:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.592 ************************************ 00:02:29.593 START TEST ubsan 00:02:29.593 ************************************ 00:02:29.593 using ubsan 00:02:29.593 12:23:38 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:29.593 00:02:29.593 real 0m0.000s 00:02:29.593 user 0m0.000s 00:02:29.593 sys 0m0.000s 00:02:29.593 12:23:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.593 12:23:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.593 ************************************ 00:02:29.593 END TEST ubsan 00:02:29.593 ************************************ 00:02:29.593 12:23:38 -- spdk/autobuild.sh@27 -- $ '[' 
-n '' ']' 00:02:29.593 12:23:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:29.593 12:23:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:29.593 12:23:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:29.593 12:23:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:29.593 12:23:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:29.593 12:23:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:29.593 12:23:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:29.593 12:23:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:29.593 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:29.593 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:30.158 Using 'verbs' RDMA provider 00:02:45.596 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:57.790 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:57.790 Creating mk/config.mk...done. 00:02:57.790 Creating mk/cc.flags.mk...done. 00:02:57.790 Type 'make' to build. 00:02:57.790 12:24:05 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:57.790 12:24:05 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:57.790 12:24:05 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:57.790 12:24:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.790 ************************************ 00:02:57.790 START TEST make 00:02:57.790 ************************************ 00:02:57.790 12:24:05 -- common/autotest_common.sh@1104 -- $ make -j10 00:02:57.790 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:57.790 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:57.790 meson setup builddir \ 00:02:57.790 -Dwith-libaio=enabled \ 00:02:57.790 -Dwith-liburing=enabled \ 00:02:57.790 -Dwith-libvfn=disabled \ 00:02:57.790 -Dwith-spdk=false && \ 00:02:57.790 meson compile -C builddir && \ 00:02:57.790 cd -) 00:02:57.790 make[1]: Nothing to be done for 'all'. 
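Stripped of the xtrace noise, the configure-and-build stage above reduces to a short sequence; a condensed sketch using only flags that appear in the trace, run from the SPDK checkout:

  ./configure --enable-debug --enable-werror --enable-asan --enable-ubsan \
      --enable-coverage --with-xnvme --with-shared
  make -j10    # make first builds the xnvme submodule via meson, as the output below shows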
00:02:59.688 The Meson build system
00:02:59.688 Version: 1.3.1
00:02:59.688 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:59.688 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:59.688 Build type: native build
00:02:59.688 Project name: xnvme
00:02:59.688 Project version: 0.7.3
00:02:59.688 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:59.688 C linker for the host machine: cc ld.bfd 2.39-16
00:02:59.688 Host machine cpu family: x86_64
00:02:59.688 Host machine cpu: x86_64
00:02:59.688 Message: host_machine.system: linux
00:02:59.688 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:59.688 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:59.688 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:59.689 Run-time dependency threads found: YES
00:02:59.689 Has header "setupapi.h" : NO
00:02:59.689 Has header "linux/blkzoned.h" : YES
00:02:59.689 Has header "linux/blkzoned.h" : YES (cached)
00:02:59.689 Has header "libaio.h" : YES
00:02:59.689 Library aio found: YES
00:02:59.689 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:59.689 Run-time dependency liburing found: YES 2.2
00:02:59.689 Dependency libvfn skipped: feature with-libvfn disabled
00:02:59.689 Run-time dependency appleframeworks found: NO (tried framework)
00:02:59.689 Run-time dependency appleframeworks found: NO (tried framework)
00:02:59.689 Configuring xnvme_config.h using configuration
00:02:59.689 Configuring xnvme.spec using configuration
00:02:59.689 Run-time dependency bash-completion found: YES 2.11
00:02:59.689 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:59.689 Program cp found: YES (/usr/bin/cp)
00:02:59.689 Has header "winsock2.h" : NO
00:02:59.689 Has header "dbghelp.h" : NO
00:02:59.689 Library rpcrt4 found: NO
00:02:59.689 Library rt found: YES
00:02:59.689 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:59.689 Found CMake: /usr/bin/cmake (3.27.7)
00:02:59.689 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:59.689 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:59.689 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:59.689 Build targets in project: 32
00:02:59.689
00:02:59.689 xnvme 0.7.3
00:02:59.689
00:02:59.689 User defined options
00:02:59.689 with-libaio : enabled
00:02:59.689 with-liburing: enabled
00:02:59.689 with-libvfn : disabled
00:02:59.689 with-spdk : false
00:02:59.689
00:02:59.689 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:00.254 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:00.254 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:03:00.254 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:03:00.254 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:03:00.254 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:03:00.254 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:03:00.254 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:03:00.254 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:03:00.254 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:03:00.254 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:03:00.254 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:03:00.254
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:00.513 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:00.513 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:00.513 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:00.513 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:00.513 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:00.513 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:00.513 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:00.513 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:00.513 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:00.513 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:00.513 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:00.513 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:00.513 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:00.513 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:00.513 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:00.513 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:00.513 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:00.771 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:00.771 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:00.771 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:00.771 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:00.771 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:00.771 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:00.771 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:00.771 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:00.771 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:00.771 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:00.771 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:00.771 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:00.771 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:00.771 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:00.771 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:00.771 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:00.771 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:00.771 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:00.771 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:00.771 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:00.771 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:00.771 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:00.771 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:00.771 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:00.771 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:00.771 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:00.771 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:00.771 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:01.029 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:01.029 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:01.029 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:01.029 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:01.029 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:01.029 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:01.029 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:01.029 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:01.029 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:01.029 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:01.029 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:01.029 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:01.029 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:01.029 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:01.029 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:01.029 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:01.029 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:01.288 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:01.288 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:01.288 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:01.288 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:01.288 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:01.288 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:01.288 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:01.288 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:01.288 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:01.288 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:01.288 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:01.288 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:01.288 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:01.550 [87/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:01.550 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:01.550 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:01.550 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:01.550 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:01.550 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:01.550 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:01.550 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:01.550 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:01.550 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:01.550 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:01.550 [98/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:01.550 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:01.550 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:01.550 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:01.550 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:01.550 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:01.550 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:01.550 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:01.550 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:01.550 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:01.550 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:01.550 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:01.550 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:01.550 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:01.550 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:01.550 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:01.550 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:01.550 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:01.550 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:01.812 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:01.812 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:01.812 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:01.812 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:01.812 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:01.812 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:01.812 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:01.812 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:01.812 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:01.812 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:01.812 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:01.812 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:01.812 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:01.812 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:01.812 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:01.812 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:01.812 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:01.812 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:02.070 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:02.070 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:02.070 [137/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:02.070 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:02.070 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:02.070 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:02.070 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:02.070 [142/203] Compiling C object 
tests/xnvme_tests_cli.p/cli.c.o 00:03:02.070 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:02.070 [144/203] Linking target lib/libxnvme.so 00:03:02.070 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:02.070 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:02.070 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:02.070 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:02.070 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:02.327 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:02.327 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:02.327 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:02.327 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:02.327 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:02.327 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:02.327 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:02.327 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:02.327 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:02.327 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:02.328 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:02.328 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:02.328 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:02.328 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:02.586 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:02.586 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:02.586 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:02.586 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:02.586 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:02.586 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:02.586 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:02.843 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:02.843 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:02.843 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:02.843 [174/203] Linking static target lib/libxnvme.a 00:03:02.843 [175/203] Linking target tests/xnvme_tests_async_intf 00:03:02.843 [176/203] Linking target tests/xnvme_tests_znd_append 00:03:02.843 [177/203] Linking target tests/xnvme_tests_lblk 00:03:02.843 [178/203] Linking target tests/xnvme_tests_enum 00:03:02.843 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:03:02.843 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:02.843 [181/203] Linking target tests/xnvme_tests_cli 00:03:02.843 [182/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:02.843 [183/203] Linking target tests/xnvme_tests_buf 00:03:02.843 [184/203] Linking target tests/xnvme_tests_ioworker 00:03:02.843 [185/203] Linking target tests/xnvme_tests_znd_state 00:03:02.843 [186/203] Linking target tests/xnvme_tests_scc 00:03:02.843 [187/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:03.128 [188/203] Linking target tools/xnvme 00:03:03.128 [189/203] Linking target tools/xdd 00:03:03.128 [190/203] Linking target tests/xnvme_tests_kvs 00:03:03.128 [191/203] Linking target 
tools/kvs 00:03:03.128 [192/203] Linking target examples/xnvme_dev 00:03:03.128 [193/203] Linking target examples/xnvme_enum 00:03:03.128 [194/203] Linking target tools/lblk 00:03:03.128 [195/203] Linking target tests/xnvme_tests_map 00:03:03.128 [196/203] Linking target examples/xnvme_hello 00:03:03.128 [197/203] Linking target tools/zoned 00:03:03.128 [198/203] Linking target examples/xnvme_io_async 00:03:03.128 [199/203] Linking target tools/xnvme_file 00:03:03.128 [200/203] Linking target examples/xnvme_single_async 00:03:03.128 [201/203] Linking target examples/xnvme_single_sync 00:03:03.128 [202/203] Linking target examples/zoned_io_async 00:03:03.128 [203/203] Linking target examples/zoned_io_sync 00:03:03.128 INFO: autodetecting backend as ninja 00:03:03.128 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:03.128 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:11.302 The Meson build system 00:03:11.302 Version: 1.3.1 00:03:11.302 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:11.302 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:11.302 Build type: native build 00:03:11.302 Program cat found: YES (/usr/bin/cat) 00:03:11.302 Project name: DPDK 00:03:11.302 Project version: 23.11.0 00:03:11.302 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:11.302 C linker for the host machine: cc ld.bfd 2.39-16 00:03:11.302 Host machine cpu family: x86_64 00:03:11.302 Host machine cpu: x86_64 00:03:11.302 Message: ## Building in Developer Mode ## 00:03:11.302 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:11.302 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:11.302 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:11.302 Program python3 found: YES (/usr/bin/python3) 00:03:11.302 Program cat found: YES (/usr/bin/cat) 00:03:11.302 Compiler for C supports arguments -march=native: YES 00:03:11.302 Checking for size of "void *" : 8 00:03:11.302 Checking for size of "void *" : 8 (cached) 00:03:11.302 Library m found: YES 00:03:11.302 Library numa found: YES 00:03:11.302 Has header "numaif.h" : YES 00:03:11.302 Library fdt found: NO 00:03:11.302 Library execinfo found: NO 00:03:11.302 Has header "execinfo.h" : YES 00:03:11.302 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:11.302 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:11.302 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:11.302 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:11.302 Run-time dependency openssl found: YES 3.0.9 00:03:11.302 Run-time dependency libpcap found: YES 1.10.4 00:03:11.302 Has header "pcap.h" with dependency libpcap: YES 00:03:11.302 Compiler for C supports arguments -Wcast-qual: YES 00:03:11.302 Compiler for C supports arguments -Wdeprecated: YES 00:03:11.302 Compiler for C supports arguments -Wformat: YES 00:03:11.302 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:11.302 Compiler for C supports arguments -Wformat-security: NO 00:03:11.302 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.302 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:11.302 Compiler for C supports arguments -Wnested-externs: YES 00:03:11.302 Compiler for C supports arguments -Wold-style-definition: YES 00:03:11.302 Compiler for C supports arguments -Wpointer-arith: 
YES 00:03:11.302 Compiler for C supports arguments -Wsign-compare: YES 00:03:11.302 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:11.302 Compiler for C supports arguments -Wundef: YES 00:03:11.302 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.302 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:11.302 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:11.302 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.302 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:11.302 Program objdump found: YES (/usr/bin/objdump) 00:03:11.302 Compiler for C supports arguments -mavx512f: YES 00:03:11.302 Checking if "AVX512 checking" compiles: YES 00:03:11.302 Fetching value of define "__SSE4_2__" : 1 00:03:11.302 Fetching value of define "__AES__" : 1 00:03:11.302 Fetching value of define "__AVX__" : 1 00:03:11.302 Fetching value of define "__AVX2__" : 1 00:03:11.302 Fetching value of define "__AVX512BW__" : (undefined) 00:03:11.302 Fetching value of define "__AVX512CD__" : (undefined) 00:03:11.302 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:11.302 Fetching value of define "__AVX512F__" : (undefined) 00:03:11.302 Fetching value of define "__AVX512VL__" : (undefined) 00:03:11.302 Fetching value of define "__PCLMUL__" : 1 00:03:11.302 Fetching value of define "__RDRND__" : 1 00:03:11.302 Fetching value of define "__RDSEED__" : 1 00:03:11.302 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:11.302 Fetching value of define "__znver1__" : (undefined) 00:03:11.302 Fetching value of define "__znver2__" : (undefined) 00:03:11.302 Fetching value of define "__znver3__" : (undefined) 00:03:11.302 Fetching value of define "__znver4__" : (undefined) 00:03:11.302 Library asan found: YES 00:03:11.302 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:11.302 Message: lib/log: Defining dependency "log" 00:03:11.303 Message: lib/kvargs: Defining dependency "kvargs" 00:03:11.303 Message: lib/telemetry: Defining dependency "telemetry" 00:03:11.303 Library rt found: YES 00:03:11.303 Checking for function "getentropy" : NO 00:03:11.303 Message: lib/eal: Defining dependency "eal" 00:03:11.303 Message: lib/ring: Defining dependency "ring" 00:03:11.303 Message: lib/rcu: Defining dependency "rcu" 00:03:11.303 Message: lib/mempool: Defining dependency "mempool" 00:03:11.303 Message: lib/mbuf: Defining dependency "mbuf" 00:03:11.303 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:11.303 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:11.303 Compiler for C supports arguments -mpclmul: YES 00:03:11.303 Compiler for C supports arguments -maes: YES 00:03:11.303 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:11.303 Compiler for C supports arguments -mavx512bw: YES 00:03:11.303 Compiler for C supports arguments -mavx512dq: YES 00:03:11.303 Compiler for C supports arguments -mavx512vl: YES 00:03:11.303 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:11.303 Compiler for C supports arguments -mavx2: YES 00:03:11.303 Compiler for C supports arguments -mavx: YES 00:03:11.303 Message: lib/net: Defining dependency "net" 00:03:11.303 Message: lib/meter: Defining dependency "meter" 00:03:11.303 Message: lib/ethdev: Defining dependency "ethdev" 00:03:11.303 Message: lib/pci: Defining dependency "pci" 00:03:11.303 Message: lib/cmdline: Defining dependency "cmdline" 00:03:11.303 Message: lib/hash: Defining dependency "hash" 
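The "Fetching value of define" probes above are queries against the compiler's predefined macros; the same information can be pulled outside meson straight from the preprocessor (assuming gcc/clang semantics for -dM):

  echo | cc -march=native -dM -E - | grep -E '__(AES|PCLMUL|AVX512F)__'
  # prints e.g. "#define __AES__ 1"; macros meson reports as (undefined) are simply absent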
00:03:11.303 Message: lib/timer: Defining dependency "timer" 00:03:11.303 Message: lib/compressdev: Defining dependency "compressdev" 00:03:11.303 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:11.303 Message: lib/dmadev: Defining dependency "dmadev" 00:03:11.303 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:11.303 Message: lib/power: Defining dependency "power" 00:03:11.303 Message: lib/reorder: Defining dependency "reorder" 00:03:11.303 Message: lib/security: Defining dependency "security" 00:03:11.303 Has header "linux/userfaultfd.h" : YES 00:03:11.303 Has header "linux/vduse.h" : YES 00:03:11.303 Message: lib/vhost: Defining dependency "vhost" 00:03:11.303 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:11.303 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:11.303 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:11.303 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:11.303 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:11.303 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:11.303 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:11.303 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:11.303 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:11.303 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:11.303 Program doxygen found: YES (/usr/bin/doxygen) 00:03:11.303 Configuring doxy-api-html.conf using configuration 00:03:11.303 Configuring doxy-api-man.conf using configuration 00:03:11.303 Program mandb found: YES (/usr/bin/mandb) 00:03:11.303 Program sphinx-build found: NO 00:03:11.303 Configuring rte_build_config.h using configuration 00:03:11.303 Message: 00:03:11.303 ================= 00:03:11.303 Applications Enabled 00:03:11.303 ================= 00:03:11.303 00:03:11.303 apps: 00:03:11.303 00:03:11.303 00:03:11.303 Message: 00:03:11.303 ================= 00:03:11.303 Libraries Enabled 00:03:11.303 ================= 00:03:11.303 00:03:11.303 libs: 00:03:11.303 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:11.303 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:11.303 cryptodev, dmadev, power, reorder, security, vhost, 00:03:11.303 00:03:11.303 Message: 00:03:11.303 =============== 00:03:11.303 Drivers Enabled 00:03:11.303 =============== 00:03:11.303 00:03:11.303 common: 00:03:11.303 00:03:11.303 bus: 00:03:11.303 pci, vdev, 00:03:11.303 mempool: 00:03:11.303 ring, 00:03:11.303 dma: 00:03:11.303 00:03:11.303 net: 00:03:11.303 00:03:11.303 crypto: 00:03:11.303 00:03:11.303 compress: 00:03:11.303 00:03:11.303 vdpa: 00:03:11.303 00:03:11.303 00:03:11.303 Message: 00:03:11.303 ================= 00:03:11.303 Content Skipped 00:03:11.303 ================= 00:03:11.303 00:03:11.303 apps: 00:03:11.303 dumpcap: explicitly disabled via build config 00:03:11.303 graph: explicitly disabled via build config 00:03:11.303 pdump: explicitly disabled via build config 00:03:11.303 proc-info: explicitly disabled via build config 00:03:11.303 test-acl: explicitly disabled via build config 00:03:11.303 test-bbdev: explicitly disabled via build config 00:03:11.303 test-cmdline: explicitly disabled via build config 00:03:11.303 test-compress-perf: explicitly disabled via build config 00:03:11.303 test-crypto-perf: explicitly disabled via build config 00:03:11.303 test-dma-perf: explicitly 
disabled via build config 00:03:11.303 test-eventdev: explicitly disabled via build config 00:03:11.303 test-fib: explicitly disabled via build config 00:03:11.303 test-flow-perf: explicitly disabled via build config 00:03:11.303 test-gpudev: explicitly disabled via build config 00:03:11.303 test-mldev: explicitly disabled via build config 00:03:11.303 test-pipeline: explicitly disabled via build config 00:03:11.303 test-pmd: explicitly disabled via build config 00:03:11.303 test-regex: explicitly disabled via build config 00:03:11.303 test-sad: explicitly disabled via build config 00:03:11.303 test-security-perf: explicitly disabled via build config 00:03:11.303 00:03:11.303 libs: 00:03:11.303 metrics: explicitly disabled via build config 00:03:11.303 acl: explicitly disabled via build config 00:03:11.303 bbdev: explicitly disabled via build config 00:03:11.303 bitratestats: explicitly disabled via build config 00:03:11.303 bpf: explicitly disabled via build config 00:03:11.303 cfgfile: explicitly disabled via build config 00:03:11.303 distributor: explicitly disabled via build config 00:03:11.303 efd: explicitly disabled via build config 00:03:11.303 eventdev: explicitly disabled via build config 00:03:11.303 dispatcher: explicitly disabled via build config 00:03:11.303 gpudev: explicitly disabled via build config 00:03:11.303 gro: explicitly disabled via build config 00:03:11.303 gso: explicitly disabled via build config 00:03:11.303 ip_frag: explicitly disabled via build config 00:03:11.303 jobstats: explicitly disabled via build config 00:03:11.303 latencystats: explicitly disabled via build config 00:03:11.303 lpm: explicitly disabled via build config 00:03:11.303 member: explicitly disabled via build config 00:03:11.303 pcapng: explicitly disabled via build config 00:03:11.303 rawdev: explicitly disabled via build config 00:03:11.303 regexdev: explicitly disabled via build config 00:03:11.303 mldev: explicitly disabled via build config 00:03:11.303 rib: explicitly disabled via build config 00:03:11.303 sched: explicitly disabled via build config 00:03:11.303 stack: explicitly disabled via build config 00:03:11.303 ipsec: explicitly disabled via build config 00:03:11.303 pdcp: explicitly disabled via build config 00:03:11.303 fib: explicitly disabled via build config 00:03:11.303 port: explicitly disabled via build config 00:03:11.303 pdump: explicitly disabled via build config 00:03:11.303 table: explicitly disabled via build config 00:03:11.303 pipeline: explicitly disabled via build config 00:03:11.303 graph: explicitly disabled via build config 00:03:11.303 node: explicitly disabled via build config 00:03:11.303 00:03:11.303 drivers: 00:03:11.303 common/cpt: not in enabled drivers build config 00:03:11.303 common/dpaax: not in enabled drivers build config 00:03:11.303 common/iavf: not in enabled drivers build config 00:03:11.303 common/idpf: not in enabled drivers build config 00:03:11.303 common/mvep: not in enabled drivers build config 00:03:11.303 common/octeontx: not in enabled drivers build config 00:03:11.303 bus/auxiliary: not in enabled drivers build config 00:03:11.303 bus/cdx: not in enabled drivers build config 00:03:11.303 bus/dpaa: not in enabled drivers build config 00:03:11.303 bus/fslmc: not in enabled drivers build config 00:03:11.303 bus/ifpga: not in enabled drivers build config 00:03:11.303 bus/platform: not in enabled drivers build config 00:03:11.303 bus/vmbus: not in enabled drivers build config 00:03:11.303 common/cnxk: not in enabled drivers build config 
00:03:11.303 common/mlx5: not in enabled drivers build config 00:03:11.303 common/nfp: not in enabled drivers build config 00:03:11.303 common/qat: not in enabled drivers build config 00:03:11.303 common/sfc_efx: not in enabled drivers build config 00:03:11.303 mempool/bucket: not in enabled drivers build config 00:03:11.303 mempool/cnxk: not in enabled drivers build config 00:03:11.303 mempool/dpaa: not in enabled drivers build config 00:03:11.303 mempool/dpaa2: not in enabled drivers build config 00:03:11.303 mempool/octeontx: not in enabled drivers build config 00:03:11.303 mempool/stack: not in enabled drivers build config 00:03:11.303 dma/cnxk: not in enabled drivers build config 00:03:11.303 dma/dpaa: not in enabled drivers build config 00:03:11.303 dma/dpaa2: not in enabled drivers build config 00:03:11.303 dma/hisilicon: not in enabled drivers build config 00:03:11.303 dma/idxd: not in enabled drivers build config 00:03:11.303 dma/ioat: not in enabled drivers build config 00:03:11.303 dma/skeleton: not in enabled drivers build config 00:03:11.303 net/af_packet: not in enabled drivers build config 00:03:11.303 net/af_xdp: not in enabled drivers build config 00:03:11.303 net/ark: not in enabled drivers build config 00:03:11.303 net/atlantic: not in enabled drivers build config 00:03:11.303 net/avp: not in enabled drivers build config 00:03:11.303 net/axgbe: not in enabled drivers build config 00:03:11.303 net/bnx2x: not in enabled drivers build config 00:03:11.303 net/bnxt: not in enabled drivers build config 00:03:11.303 net/bonding: not in enabled drivers build config 00:03:11.303 net/cnxk: not in enabled drivers build config 00:03:11.303 net/cpfl: not in enabled drivers build config 00:03:11.303 net/cxgbe: not in enabled drivers build config 00:03:11.303 net/dpaa: not in enabled drivers build config 00:03:11.303 net/dpaa2: not in enabled drivers build config 00:03:11.303 net/e1000: not in enabled drivers build config 00:03:11.303 net/ena: not in enabled drivers build config 00:03:11.303 net/enetc: not in enabled drivers build config 00:03:11.303 net/enetfec: not in enabled drivers build config 00:03:11.303 net/enic: not in enabled drivers build config 00:03:11.303 net/failsafe: not in enabled drivers build config 00:03:11.303 net/fm10k: not in enabled drivers build config 00:03:11.304 net/gve: not in enabled drivers build config 00:03:11.304 net/hinic: not in enabled drivers build config 00:03:11.304 net/hns3: not in enabled drivers build config 00:03:11.304 net/i40e: not in enabled drivers build config 00:03:11.304 net/iavf: not in enabled drivers build config 00:03:11.304 net/ice: not in enabled drivers build config 00:03:11.304 net/idpf: not in enabled drivers build config 00:03:11.304 net/igc: not in enabled drivers build config 00:03:11.304 net/ionic: not in enabled drivers build config 00:03:11.304 net/ipn3ke: not in enabled drivers build config 00:03:11.304 net/ixgbe: not in enabled drivers build config 00:03:11.304 net/mana: not in enabled drivers build config 00:03:11.304 net/memif: not in enabled drivers build config 00:03:11.304 net/mlx4: not in enabled drivers build config 00:03:11.304 net/mlx5: not in enabled drivers build config 00:03:11.304 net/mvneta: not in enabled drivers build config 00:03:11.304 net/mvpp2: not in enabled drivers build config 00:03:11.304 net/netvsc: not in enabled drivers build config 00:03:11.304 net/nfb: not in enabled drivers build config 00:03:11.304 net/nfp: not in enabled drivers build config 00:03:11.304 net/ngbe: not in enabled drivers 
build config 00:03:11.304 net/null: not in enabled drivers build config 00:03:11.304 net/octeontx: not in enabled drivers build config 00:03:11.304 net/octeon_ep: not in enabled drivers build config 00:03:11.304 net/pcap: not in enabled drivers build config 00:03:11.304 net/pfe: not in enabled drivers build config 00:03:11.304 net/qede: not in enabled drivers build config 00:03:11.304 net/ring: not in enabled drivers build config 00:03:11.304 net/sfc: not in enabled drivers build config 00:03:11.304 net/softnic: not in enabled drivers build config 00:03:11.304 net/tap: not in enabled drivers build config 00:03:11.304 net/thunderx: not in enabled drivers build config 00:03:11.304 net/txgbe: not in enabled drivers build config 00:03:11.304 net/vdev_netvsc: not in enabled drivers build config 00:03:11.304 net/vhost: not in enabled drivers build config 00:03:11.304 net/virtio: not in enabled drivers build config 00:03:11.304 net/vmxnet3: not in enabled drivers build config 00:03:11.304 raw/*: missing internal dependency, "rawdev" 00:03:11.304 crypto/armv8: not in enabled drivers build config 00:03:11.304 crypto/bcmfs: not in enabled drivers build config 00:03:11.304 crypto/caam_jr: not in enabled drivers build config 00:03:11.304 crypto/ccp: not in enabled drivers build config 00:03:11.304 crypto/cnxk: not in enabled drivers build config 00:03:11.304 crypto/dpaa_sec: not in enabled drivers build config 00:03:11.304 crypto/dpaa2_sec: not in enabled drivers build config 00:03:11.304 crypto/ipsec_mb: not in enabled drivers build config 00:03:11.304 crypto/mlx5: not in enabled drivers build config 00:03:11.304 crypto/mvsam: not in enabled drivers build config 00:03:11.304 crypto/nitrox: not in enabled drivers build config 00:03:11.304 crypto/null: not in enabled drivers build config 00:03:11.304 crypto/octeontx: not in enabled drivers build config 00:03:11.304 crypto/openssl: not in enabled drivers build config 00:03:11.304 crypto/scheduler: not in enabled drivers build config 00:03:11.304 crypto/uadk: not in enabled drivers build config 00:03:11.304 crypto/virtio: not in enabled drivers build config 00:03:11.304 compress/isal: not in enabled drivers build config 00:03:11.304 compress/mlx5: not in enabled drivers build config 00:03:11.304 compress/octeontx: not in enabled drivers build config 00:03:11.304 compress/zlib: not in enabled drivers build config 00:03:11.304 regex/*: missing internal dependency, "regexdev" 00:03:11.304 ml/*: missing internal dependency, "mldev" 00:03:11.304 vdpa/ifc: not in enabled drivers build config 00:03:11.304 vdpa/mlx5: not in enabled drivers build config 00:03:11.304 vdpa/nfp: not in enabled drivers build config 00:03:11.304 vdpa/sfc: not in enabled drivers build config 00:03:11.304 event/*: missing internal dependency, "eventdev" 00:03:11.304 baseband/*: missing internal dependency, "bbdev" 00:03:11.304 gpu/*: missing internal dependency, "gpudev" 00:03:11.304 00:03:11.304 00:03:11.304 Build targets in project: 85 00:03:11.304 00:03:11.304 DPDK 23.11.0 00:03:11.304 00:03:11.304 User defined options 00:03:11.304 buildtype : debug 00:03:11.304 default_library : shared 00:03:11.304 libdir : lib 00:03:11.304 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:11.304 b_sanitize : address 00:03:11.304 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:03:11.304 c_link_args : 00:03:11.304 cpu_instruction_set: native 00:03:11.304 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:11.304 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:11.304 enable_docs : false 00:03:11.304 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:11.304 enable_kmods : false 00:03:11.304 tests : false 00:03:11.304 00:03:11.304 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:11.304 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:11.304 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:11.304 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:11.304 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:11.304 [4/265] Linking static target lib/librte_kvargs.a 00:03:11.304 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:11.304 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:11.562 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:11.562 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:11.562 [9/265] Linking static target lib/librte_log.a 00:03:11.562 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:11.820 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.078 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:12.336 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:12.336 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:12.336 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:12.336 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:12.336 [17/265] Linking static target lib/librte_telemetry.a 00:03:12.336 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:12.336 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.600 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:12.600 [21/265] Linking target lib/librte_log.so.24.0 00:03:12.600 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:12.600 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:12.866 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:12.866 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:12.866 [26/265] Linking target lib/librte_kvargs.so.24.0 00:03:13.124 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:13.124 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:13.124 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.381 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:13.381 [31/265] Linking target 
lib/librte_telemetry.so.24.0 00:03:13.381 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:13.381 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:13.381 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:13.381 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:13.639 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:13.639 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:13.639 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:13.639 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:13.897 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:13.897 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:13.897 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:13.897 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:13.897 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:14.155 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:14.155 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:14.412 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:14.671 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:14.671 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:14.671 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:14.671 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:14.671 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:14.928 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:14.928 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:14.928 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:15.186 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:15.186 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:15.186 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:15.186 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:15.443 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:15.443 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:15.443 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:15.443 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:15.700 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:15.700 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:15.700 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:15.956 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:15.956 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:16.213 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:16.213 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:16.213 
[71/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:16.470 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:16.470 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:16.470 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:16.470 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:16.470 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:16.728 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:16.728 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:16.728 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:16.987 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:16.987 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:16.987 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:17.244 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:17.502 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:17.502 [85/265] Linking static target lib/librte_ring.a 00:03:17.502 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:17.502 [87/265] Linking static target lib/librte_eal.a 00:03:17.761 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:17.761 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:17.761 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:18.019 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:18.019 [92/265] Linking static target lib/librte_rcu.a 00:03:18.019 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:18.019 [94/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.019 [95/265] Linking static target lib/librte_mempool.a 00:03:18.277 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:18.277 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:18.535 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.535 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:18.535 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:18.793 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:18.793 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:18.793 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:18.793 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:19.051 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:19.051 [106/265] Linking static target lib/librte_mbuf.a 00:03:19.309 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:19.309 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:19.309 [109/265] Linking static target lib/librte_net.a 00:03:19.309 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:19.309 [111/265] Linking static target lib/librte_meter.a 00:03:19.309 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.566 [113/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:19.566 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.824 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.824 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:19.824 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:19.824 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:20.081 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.081 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:20.657 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:20.657 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:20.657 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:20.914 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:20.914 [125/265] Linking static target lib/librte_pci.a 00:03:20.914 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:21.173 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:21.173 [128/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.173 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:21.173 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:21.173 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:21.173 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:21.430 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:21.430 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:21.430 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:21.430 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:21.430 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:21.430 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:21.430 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:21.430 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:21.688 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:21.688 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:21.946 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:21.946 [144/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:21.946 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:21.946 [146/265] Linking static target lib/librte_cmdline.a 00:03:22.510 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:22.510 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:22.510 [149/265] Linking static target lib/librte_timer.a 00:03:22.510 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:22.510 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:22.767 [152/265] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:22.767 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:22.767 [154/265] Linking static target lib/librte_compressdev.a 00:03:23.025 [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:23.025 [156/265] Linking static target lib/librte_ethdev.a 00:03:23.283 [157/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.283 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:23.283 [159/265] Linking static target lib/librte_hash.a 00:03:23.283 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:23.541 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:23.541 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:23.541 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:23.847 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:23.847 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:23.847 [166/265] Linking static target lib/librte_dmadev.a 00:03:23.847 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.104 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:24.104 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.104 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:24.104 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:24.362 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:24.362 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.619 [174/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.619 [175/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:24.619 [176/265] Linking static target lib/librte_cryptodev.a 00:03:24.619 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:24.619 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:24.877 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:24.877 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:24.877 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:25.153 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:25.153 [183/265] Linking static target lib/librte_power.a 00:03:25.153 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:25.153 [185/265] Linking static target lib/librte_reorder.a 00:03:25.411 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:25.411 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:25.668 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:25.668 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:25.668 [190/265] Linking static target lib/librte_security.a 00:03:25.668 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.925 [192/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:26.182 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.439 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.439 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:26.439 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:26.697 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:26.697 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:26.955 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:26.955 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.955 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:26.955 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:27.213 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:27.213 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:27.471 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:27.471 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:27.471 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:27.471 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:27.745 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:27.745 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:27.745 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:27.745 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:27.745 [213/265] Linking static target drivers/librte_bus_vdev.a 00:03:27.745 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.745 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.745 [216/265] Linking static target drivers/librte_bus_pci.a 00:03:27.745 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:27.745 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:28.003 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.003 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:28.003 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.003 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.003 [223/265] Linking static target drivers/librte_mempool_ring.a 00:03:28.260 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.194 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.451 [226/265] Linking target lib/librte_eal.so.24.0 00:03:29.451 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:29.451 [228/265] Linking target lib/librte_pci.so.24.0 00:03:29.451 [229/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:29.451 [230/265] Linking target lib/librte_ring.so.24.0 00:03:29.451 [231/265] Linking target lib/librte_timer.so.24.0 00:03:29.451 [232/265] Linking target lib/librte_meter.so.24.0 00:03:29.708 [233/265] Linking target lib/librte_dmadev.so.24.0 00:03:29.709 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:29.709 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:29.709 [236/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:29.709 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:29.709 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:29.709 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:29.709 [240/265] Linking target lib/librte_rcu.so.24.0 00:03:29.709 [241/265] Linking target lib/librte_mempool.so.24.0 00:03:29.709 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:29.969 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:29.969 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:29.969 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:29.969 [246/265] Linking target lib/librte_mbuf.so.24.0 00:03:30.226 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:30.226 [248/265] Linking target lib/librte_net.so.24.0 00:03:30.226 [249/265] Linking target lib/librte_cryptodev.so.24.0 00:03:30.226 [250/265] Linking target lib/librte_compressdev.so.24.0 00:03:30.226 [251/265] Linking target lib/librte_reorder.so.24.0 00:03:30.484 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:30.484 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:30.484 [254/265] Linking target lib/librte_hash.so.24.0 00:03:30.484 [255/265] Linking target lib/librte_cmdline.so.24.0 00:03:30.484 [256/265] Linking target lib/librte_security.so.24.0 00:03:30.741 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.741 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:30.741 [259/265] Linking target lib/librte_ethdev.so.24.0 00:03:30.999 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:30.999 [261/265] Linking target lib/librte_power.so.24.0 00:03:34.277 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:34.277 [263/265] Linking static target lib/librte_vhost.a 00:03:35.650 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.908 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:35.908 INFO: autodetecting backend as ninja 00:03:35.908 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:36.841 CC lib/ut_mock/mock.o 00:03:36.841 CC lib/ut/ut.o 00:03:36.841 CC lib/log/log.o 00:03:36.841 CC lib/log/log_flags.o 00:03:36.841 CC lib/log/log_deprecated.o 00:03:37.099 LIB libspdk_ut_mock.a 00:03:37.099 LIB libspdk_ut.a 00:03:37.099 LIB libspdk_log.a 00:03:37.099 SO libspdk_ut_mock.so.5.0 00:03:37.099 SO libspdk_ut.so.1.0 00:03:37.099 SO libspdk_log.so.6.1 00:03:37.099 SYMLINK libspdk_ut_mock.so 00:03:37.099 SYMLINK 
libspdk_ut.so 00:03:37.099 SYMLINK libspdk_log.so 00:03:37.356 CXX lib/trace_parser/trace.o 00:03:37.356 CC lib/util/base64.o 00:03:37.356 CC lib/util/bit_array.o 00:03:37.356 CC lib/ioat/ioat.o 00:03:37.356 CC lib/util/cpuset.o 00:03:37.356 CC lib/util/crc16.o 00:03:37.356 CC lib/util/crc32c.o 00:03:37.356 CC lib/util/crc32.o 00:03:37.356 CC lib/dma/dma.o 00:03:37.356 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.614 CC lib/vfio_user/host/vfio_user.o 00:03:37.614 CC lib/util/crc32_ieee.o 00:03:37.614 CC lib/util/crc64.o 00:03:37.614 LIB libspdk_dma.a 00:03:37.614 CC lib/util/dif.o 00:03:37.614 CC lib/util/fd.o 00:03:37.614 CC lib/util/file.o 00:03:37.614 SO libspdk_dma.so.3.0 00:03:37.614 CC lib/util/hexlify.o 00:03:37.873 SYMLINK libspdk_dma.so 00:03:37.873 CC lib/util/iov.o 00:03:37.873 CC lib/util/math.o 00:03:37.873 CC lib/util/pipe.o 00:03:37.873 LIB libspdk_ioat.a 00:03:37.873 CC lib/util/strerror_tls.o 00:03:37.873 CC lib/util/string.o 00:03:37.873 LIB libspdk_vfio_user.a 00:03:37.873 SO libspdk_ioat.so.6.0 00:03:37.873 SO libspdk_vfio_user.so.4.0 00:03:37.873 SYMLINK libspdk_ioat.so 00:03:37.873 CC lib/util/uuid.o 00:03:37.873 CC lib/util/fd_group.o 00:03:37.873 CC lib/util/xor.o 00:03:37.873 SYMLINK libspdk_vfio_user.so 00:03:37.873 CC lib/util/zipf.o 00:03:38.439 LIB libspdk_util.a 00:03:38.696 LIB libspdk_trace_parser.a 00:03:38.696 SO libspdk_util.so.8.0 00:03:38.696 SO libspdk_trace_parser.so.4.0 00:03:38.696 SYMLINK libspdk_trace_parser.so 00:03:38.696 SYMLINK libspdk_util.so 00:03:38.954 CC lib/rdma/common.o 00:03:38.954 CC lib/rdma/rdma_verbs.o 00:03:38.954 CC lib/json/json_parse.o 00:03:38.954 CC lib/env_dpdk/env.o 00:03:38.954 CC lib/json/json_util.o 00:03:38.954 CC lib/env_dpdk/memory.o 00:03:38.954 CC lib/env_dpdk/pci.o 00:03:38.954 CC lib/vmd/vmd.o 00:03:38.954 CC lib/conf/conf.o 00:03:38.954 CC lib/idxd/idxd.o 00:03:39.214 CC lib/vmd/led.o 00:03:39.214 LIB libspdk_conf.a 00:03:39.214 CC lib/json/json_write.o 00:03:39.214 SO libspdk_conf.so.5.0 00:03:39.214 CC lib/env_dpdk/init.o 00:03:39.214 SYMLINK libspdk_conf.so 00:03:39.214 CC lib/env_dpdk/threads.o 00:03:39.472 LIB libspdk_rdma.a 00:03:39.472 CC lib/idxd/idxd_user.o 00:03:39.472 SO libspdk_rdma.so.5.0 00:03:39.472 CC lib/env_dpdk/pci_ioat.o 00:03:39.472 CC lib/env_dpdk/pci_virtio.o 00:03:39.472 SYMLINK libspdk_rdma.so 00:03:39.472 CC lib/env_dpdk/pci_vmd.o 00:03:39.472 LIB libspdk_json.a 00:03:39.730 SO libspdk_json.so.5.1 00:03:39.730 CC lib/env_dpdk/pci_idxd.o 00:03:39.730 CC lib/env_dpdk/pci_event.o 00:03:39.730 SYMLINK libspdk_json.so 00:03:39.730 CC lib/env_dpdk/sigbus_handler.o 00:03:39.730 CC lib/env_dpdk/pci_dpdk.o 00:03:39.730 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:39.730 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:39.730 LIB libspdk_idxd.a 00:03:39.730 SO libspdk_idxd.so.11.0 00:03:39.989 CC lib/jsonrpc/jsonrpc_server.o 00:03:39.989 CC lib/jsonrpc/jsonrpc_client.o 00:03:39.989 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:39.989 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:39.989 SYMLINK libspdk_idxd.so 00:03:39.989 LIB libspdk_vmd.a 00:03:39.989 SO libspdk_vmd.so.5.0 00:03:39.989 SYMLINK libspdk_vmd.so 00:03:40.250 LIB libspdk_jsonrpc.a 00:03:40.250 SO libspdk_jsonrpc.so.5.1 00:03:40.250 SYMLINK libspdk_jsonrpc.so 00:03:40.509 CC lib/rpc/rpc.o 00:03:40.768 LIB libspdk_rpc.a 00:03:40.768 SO libspdk_rpc.so.5.0 00:03:40.768 SYMLINK libspdk_rpc.so 00:03:40.768 LIB libspdk_env_dpdk.a 00:03:41.026 SO libspdk_env_dpdk.so.13.0 00:03:41.026 CC lib/notify/notify.o 00:03:41.026 CC lib/sock/sock.o 00:03:41.026 CC 
lib/sock/sock_rpc.o 00:03:41.026 CC lib/notify/notify_rpc.o 00:03:41.026 CC lib/trace/trace.o 00:03:41.026 CC lib/trace/trace_flags.o 00:03:41.026 CC lib/trace/trace_rpc.o 00:03:41.026 SYMLINK libspdk_env_dpdk.so 00:03:41.284 LIB libspdk_notify.a 00:03:41.284 SO libspdk_notify.so.5.0 00:03:41.284 LIB libspdk_trace.a 00:03:41.284 SYMLINK libspdk_notify.so 00:03:41.284 SO libspdk_trace.so.9.0 00:03:41.551 SYMLINK libspdk_trace.so 00:03:41.551 CC lib/thread/thread.o 00:03:41.551 CC lib/thread/iobuf.o 00:03:41.551 LIB libspdk_sock.a 00:03:41.551 SO libspdk_sock.so.8.0 00:03:41.812 SYMLINK libspdk_sock.so 00:03:41.812 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:41.812 CC lib/nvme/nvme_ctrlr.o 00:03:41.812 CC lib/nvme/nvme_fabric.o 00:03:41.812 CC lib/nvme/nvme_ns_cmd.o 00:03:41.812 CC lib/nvme/nvme_ns.o 00:03:41.812 CC lib/nvme/nvme_pcie_common.o 00:03:41.812 CC lib/nvme/nvme_pcie.o 00:03:41.812 CC lib/nvme/nvme_qpair.o 00:03:42.069 CC lib/nvme/nvme.o 00:03:43.004 CC lib/nvme/nvme_quirks.o 00:03:43.004 CC lib/nvme/nvme_transport.o 00:03:43.004 CC lib/nvme/nvme_discovery.o 00:03:43.004 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:43.004 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:43.004 CC lib/nvme/nvme_tcp.o 00:03:43.261 CC lib/nvme/nvme_opal.o 00:03:43.519 CC lib/nvme/nvme_io_msg.o 00:03:43.519 CC lib/nvme/nvme_poll_group.o 00:03:43.519 CC lib/nvme/nvme_zns.o 00:03:43.519 CC lib/nvme/nvme_cuse.o 00:03:43.777 CC lib/nvme/nvme_vfio_user.o 00:03:43.777 LIB libspdk_thread.a 00:03:43.777 CC lib/nvme/nvme_rdma.o 00:03:43.777 SO libspdk_thread.so.9.0 00:03:44.035 SYMLINK libspdk_thread.so 00:03:44.035 CC lib/accel/accel.o 00:03:44.035 CC lib/blob/blobstore.o 00:03:44.293 CC lib/accel/accel_rpc.o 00:03:44.293 CC lib/accel/accel_sw.o 00:03:44.293 CC lib/blob/request.o 00:03:44.293 CC lib/blob/zeroes.o 00:03:44.293 CC lib/blob/blob_bs_dev.o 00:03:44.551 CC lib/init/subsystem.o 00:03:44.551 CC lib/init/subsystem_rpc.o 00:03:44.551 CC lib/init/json_config.o 00:03:44.808 CC lib/init/rpc.o 00:03:44.808 CC lib/virtio/virtio.o 00:03:44.808 CC lib/virtio/virtio_vhost_user.o 00:03:44.808 CC lib/virtio/virtio_vfio_user.o 00:03:44.808 CC lib/virtio/virtio_pci.o 00:03:44.808 LIB libspdk_init.a 00:03:45.066 SO libspdk_init.so.4.0 00:03:45.066 SYMLINK libspdk_init.so 00:03:45.323 CC lib/event/app.o 00:03:45.323 CC lib/event/app_rpc.o 00:03:45.323 CC lib/event/reactor.o 00:03:45.323 CC lib/event/scheduler_static.o 00:03:45.323 CC lib/event/log_rpc.o 00:03:45.323 LIB libspdk_virtio.a 00:03:45.323 SO libspdk_virtio.so.6.0 00:03:45.323 SYMLINK libspdk_virtio.so 00:03:45.323 LIB libspdk_accel.a 00:03:45.323 SO libspdk_accel.so.14.0 00:03:45.580 SYMLINK libspdk_accel.so 00:03:45.580 LIB libspdk_nvme.a 00:03:45.580 CC lib/bdev/bdev.o 00:03:45.580 CC lib/bdev/bdev_rpc.o 00:03:45.580 CC lib/bdev/bdev_zone.o 00:03:45.580 CC lib/bdev/part.o 00:03:45.580 CC lib/bdev/scsi_nvme.o 00:03:45.838 LIB libspdk_event.a 00:03:45.838 SO libspdk_nvme.so.12.0 00:03:45.838 SO libspdk_event.so.12.0 00:03:45.838 SYMLINK libspdk_event.so 00:03:46.097 SYMLINK libspdk_nvme.so 00:03:48.625 LIB libspdk_blob.a 00:03:48.625 SO libspdk_blob.so.10.1 00:03:48.625 SYMLINK libspdk_blob.so 00:03:48.625 CC lib/lvol/lvol.o 00:03:48.625 CC lib/blobfs/tree.o 00:03:48.625 CC lib/blobfs/blobfs.o 00:03:49.191 LIB libspdk_bdev.a 00:03:49.191 SO libspdk_bdev.so.14.0 00:03:49.191 SYMLINK libspdk_bdev.so 00:03:49.449 CC lib/nbd/nbd.o 00:03:49.449 CC lib/nbd/nbd_rpc.o 00:03:49.449 CC lib/ftl/ftl_core.o 00:03:49.449 CC lib/ublk/ublk.o 00:03:49.449 CC lib/ftl/ftl_layout.o 00:03:49.449 CC 
lib/nvmf/ctrlr.o 00:03:49.449 CC lib/ftl/ftl_init.o 00:03:49.449 CC lib/scsi/dev.o 00:03:49.707 LIB libspdk_blobfs.a 00:03:49.707 CC lib/scsi/lun.o 00:03:49.707 SO libspdk_blobfs.so.9.0 00:03:49.707 CC lib/scsi/port.o 00:03:49.707 CC lib/ublk/ublk_rpc.o 00:03:49.707 SYMLINK libspdk_blobfs.so 00:03:49.707 CC lib/ftl/ftl_debug.o 00:03:50.009 LIB libspdk_lvol.a 00:03:50.009 SO libspdk_lvol.so.9.1 00:03:50.009 CC lib/ftl/ftl_io.o 00:03:50.009 LIB libspdk_nbd.a 00:03:50.009 CC lib/nvmf/ctrlr_discovery.o 00:03:50.009 SYMLINK libspdk_lvol.so 00:03:50.009 CC lib/nvmf/ctrlr_bdev.o 00:03:50.009 CC lib/ftl/ftl_sb.o 00:03:50.009 SO libspdk_nbd.so.6.0 00:03:50.009 CC lib/ftl/ftl_l2p.o 00:03:50.009 SYMLINK libspdk_nbd.so 00:03:50.009 CC lib/ftl/ftl_l2p_flat.o 00:03:50.009 CC lib/scsi/scsi.o 00:03:50.009 CC lib/ftl/ftl_nv_cache.o 00:03:50.297 CC lib/ftl/ftl_band.o 00:03:50.297 CC lib/ftl/ftl_band_ops.o 00:03:50.297 CC lib/scsi/scsi_bdev.o 00:03:50.297 LIB libspdk_ublk.a 00:03:50.297 CC lib/ftl/ftl_writer.o 00:03:50.297 SO libspdk_ublk.so.2.0 00:03:50.297 CC lib/ftl/ftl_rq.o 00:03:50.297 SYMLINK libspdk_ublk.so 00:03:50.297 CC lib/ftl/ftl_reloc.o 00:03:50.555 CC lib/ftl/ftl_l2p_cache.o 00:03:50.555 CC lib/scsi/scsi_pr.o 00:03:50.555 CC lib/ftl/ftl_p2l.o 00:03:50.555 CC lib/ftl/mngt/ftl_mngt.o 00:03:50.555 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:50.813 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:50.813 CC lib/scsi/scsi_rpc.o 00:03:50.813 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.813 CC lib/nvmf/subsystem.o 00:03:51.072 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.072 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.072 CC lib/scsi/task.o 00:03:51.072 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.072 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.072 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.072 CC lib/nvmf/nvmf.o 00:03:51.330 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.330 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.330 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.330 LIB libspdk_scsi.a 00:03:51.330 CC lib/nvmf/nvmf_rpc.o 00:03:51.330 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.330 SO libspdk_scsi.so.8.0 00:03:51.588 CC lib/ftl/utils/ftl_conf.o 00:03:51.588 CC lib/nvmf/transport.o 00:03:51.588 SYMLINK libspdk_scsi.so 00:03:51.588 CC lib/nvmf/tcp.o 00:03:51.588 CC lib/iscsi/conn.o 00:03:51.588 CC lib/iscsi/init_grp.o 00:03:51.588 CC lib/ftl/utils/ftl_md.o 00:03:51.588 CC lib/vhost/vhost.o 00:03:51.846 CC lib/iscsi/iscsi.o 00:03:52.104 CC lib/vhost/vhost_rpc.o 00:03:52.104 CC lib/ftl/utils/ftl_mempool.o 00:03:52.362 CC lib/ftl/utils/ftl_bitmap.o 00:03:52.362 CC lib/ftl/utils/ftl_property.o 00:03:52.362 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:52.362 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:52.362 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:52.362 CC lib/vhost/vhost_scsi.o 00:03:52.620 CC lib/vhost/vhost_blk.o 00:03:52.620 CC lib/vhost/rte_vhost_user.o 00:03:52.620 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:52.620 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:52.620 CC lib/iscsi/md5.o 00:03:52.620 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:52.620 CC lib/iscsi/param.o 00:03:52.879 CC lib/nvmf/rdma.o 00:03:52.879 CC lib/iscsi/portal_grp.o 00:03:52.879 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:52.879 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.147 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.147 CC lib/iscsi/tgt_node.o 00:03:53.423 CC lib/iscsi/iscsi_subsystem.o 00:03:53.423 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.423 CC lib/iscsi/iscsi_rpc.o 00:03:53.423 CC lib/ftl/base/ftl_base_dev.o 00:03:53.423 CC lib/iscsi/task.o 00:03:53.681 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:53.681 CC lib/ftl/ftl_trace.o 00:03:53.681 LIB libspdk_vhost.a 00:03:53.938 SO libspdk_vhost.so.7.1 00:03:53.938 LIB libspdk_ftl.a 00:03:53.938 LIB libspdk_iscsi.a 00:03:53.938 SO libspdk_iscsi.so.7.0 00:03:53.938 SYMLINK libspdk_vhost.so 00:03:54.214 SO libspdk_ftl.so.8.0 00:03:54.214 SYMLINK libspdk_iscsi.so 00:03:54.784 SYMLINK libspdk_ftl.so 00:03:55.717 LIB libspdk_nvmf.a 00:03:55.717 SO libspdk_nvmf.so.17.0 00:03:55.976 SYMLINK libspdk_nvmf.so 00:03:56.234 CC module/env_dpdk/env_dpdk_rpc.o 00:03:56.492 CC module/accel/error/accel_error.o 00:03:56.492 CC module/accel/ioat/accel_ioat.o 00:03:56.492 CC module/accel/iaa/accel_iaa.o 00:03:56.492 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:56.492 CC module/scheduler/gscheduler/gscheduler.o 00:03:56.492 CC module/sock/posix/posix.o 00:03:56.492 CC module/accel/dsa/accel_dsa.o 00:03:56.492 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:56.492 CC module/blob/bdev/blob_bdev.o 00:03:56.492 LIB libspdk_scheduler_dpdk_governor.a 00:03:56.492 LIB libspdk_env_dpdk_rpc.a 00:03:56.492 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:56.493 CC module/accel/error/accel_error_rpc.o 00:03:56.762 SO libspdk_env_dpdk_rpc.so.5.0 00:03:56.762 LIB libspdk_scheduler_dynamic.a 00:03:56.762 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:56.762 CC module/accel/iaa/accel_iaa_rpc.o 00:03:56.762 SO libspdk_scheduler_dynamic.so.3.0 00:03:56.762 CC module/accel/dsa/accel_dsa_rpc.o 00:03:56.762 SYMLINK libspdk_env_dpdk_rpc.so 00:03:56.762 LIB libspdk_scheduler_gscheduler.a 00:03:56.762 CC module/accel/ioat/accel_ioat_rpc.o 00:03:56.762 SYMLINK libspdk_scheduler_dynamic.so 00:03:56.762 SO libspdk_scheduler_gscheduler.so.3.0 00:03:56.762 LIB libspdk_accel_error.a 00:03:56.762 LIB libspdk_accel_iaa.a 00:03:56.762 SO libspdk_accel_error.so.1.0 00:03:56.762 SYMLINK libspdk_scheduler_gscheduler.so 00:03:56.762 SO libspdk_accel_iaa.so.2.0 00:03:57.029 LIB libspdk_accel_dsa.a 00:03:57.029 LIB libspdk_accel_ioat.a 00:03:57.029 SO libspdk_accel_dsa.so.4.0 00:03:57.029 SO libspdk_accel_ioat.so.5.0 00:03:57.029 SYMLINK libspdk_accel_error.so 00:03:57.029 SYMLINK libspdk_accel_iaa.so 00:03:57.029 LIB libspdk_blob_bdev.a 00:03:57.029 SO libspdk_blob_bdev.so.10.1 00:03:57.029 SYMLINK libspdk_blob_bdev.so 00:03:57.029 SYMLINK libspdk_accel_ioat.so 00:03:57.029 SYMLINK libspdk_accel_dsa.so 00:03:57.287 CC module/bdev/error/vbdev_error.o 00:03:57.287 CC module/bdev/nvme/bdev_nvme.o 00:03:57.287 CC module/bdev/delay/vbdev_delay.o 00:03:57.287 CC module/bdev/lvol/vbdev_lvol.o 00:03:57.287 CC module/bdev/passthru/vbdev_passthru.o 00:03:57.287 CC module/bdev/null/bdev_null.o 00:03:57.287 CC module/blobfs/bdev/blobfs_bdev.o 00:03:57.287 CC module/bdev/malloc/bdev_malloc.o 00:03:57.287 CC module/bdev/gpt/gpt.o 00:03:57.545 LIB libspdk_sock_posix.a 00:03:57.545 CC module/bdev/null/bdev_null_rpc.o 00:03:57.545 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:57.545 CC module/bdev/gpt/vbdev_gpt.o 00:03:57.545 SO libspdk_sock_posix.so.5.0 00:03:57.803 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:57.803 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:57.803 SYMLINK libspdk_sock_posix.so 00:03:57.803 CC module/bdev/error/vbdev_error_rpc.o 00:03:57.803 LIB libspdk_blobfs_bdev.a 00:03:57.803 SO libspdk_blobfs_bdev.so.5.0 00:03:57.803 LIB libspdk_bdev_passthru.a 00:03:57.803 CC module/bdev/raid/bdev_raid.o 00:03:57.803 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:57.803 SO libspdk_bdev_passthru.so.5.0 00:03:57.803 LIB libspdk_bdev_null.a 
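The interleaved CC / LIB / SO / SYMLINK lines in this stretch are SPDK's make output: each component is compiled, archived into a static libspdk_*.a, linked into a versioned shared object (for example libspdk_nvmf.so.17.0 earlier in this stretch), and finally given an unversioned development symlink. A quick, hedged way to confirm such a versioned object and all of its DT_NEEDED dependencies resolve is a dlopen smoke test; the loader-path handling below is an assumption, not read from the log:

/* so_smoke.c - hedged sketch: verify a versioned shared object loads.
 * Illustrative build: gcc so_smoke.c -o so_smoke -ldl
 * Assumes the SPDK build lib directory is on the loader path, e.g.
 *   LD_LIBRARY_PATH=/home/vagrant/spdk_repo/spdk/build/lib ./so_smoke
 * (that path is a guess at this job's layout).
 */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Soname taken from the "SO libspdk_nvmf.so.17.0" line above. */
    void *handle = dlopen("libspdk_nvmf.so.17.0", RTLD_NOW);

    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    puts("libspdk_nvmf.so.17.0 and its dependencies resolved");
    dlclose(handle);
    return 0;
}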
00:03:58.061 LIB libspdk_bdev_gpt.a 00:03:58.061 SO libspdk_bdev_null.so.5.0 00:03:58.061 LIB libspdk_bdev_error.a 00:03:58.061 SYMLINK libspdk_blobfs_bdev.so 00:03:58.061 SYMLINK libspdk_bdev_passthru.so 00:03:58.061 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:58.061 LIB libspdk_bdev_malloc.a 00:03:58.061 SO libspdk_bdev_error.so.5.0 00:03:58.061 SO libspdk_bdev_gpt.so.5.0 00:03:58.061 SO libspdk_bdev_malloc.so.5.0 00:03:58.061 SYMLINK libspdk_bdev_null.so 00:03:58.061 SYMLINK libspdk_bdev_error.so 00:03:58.061 CC module/bdev/split/vbdev_split.o 00:03:58.061 CC module/bdev/split/vbdev_split_rpc.o 00:03:58.061 SYMLINK libspdk_bdev_gpt.so 00:03:58.061 SYMLINK libspdk_bdev_malloc.so 00:03:58.061 CC module/bdev/raid/bdev_raid_rpc.o 00:03:58.319 CC module/bdev/raid/bdev_raid_sb.o 00:03:58.319 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:58.319 LIB libspdk_bdev_delay.a 00:03:58.319 SO libspdk_bdev_delay.so.5.0 00:03:58.319 CC module/bdev/xnvme/bdev_xnvme.o 00:03:58.319 SYMLINK libspdk_bdev_delay.so 00:03:58.319 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:58.319 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:58.319 CC module/bdev/aio/bdev_aio.o 00:03:58.576 LIB libspdk_bdev_split.a 00:03:58.576 CC module/bdev/aio/bdev_aio_rpc.o 00:03:58.576 SO libspdk_bdev_split.so.5.0 00:03:58.577 CC module/bdev/nvme/nvme_rpc.o 00:03:58.577 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:58.577 LIB libspdk_bdev_zone_block.a 00:03:58.577 SYMLINK libspdk_bdev_split.so 00:03:58.577 CC module/bdev/nvme/bdev_mdns_client.o 00:03:58.577 LIB libspdk_bdev_lvol.a 00:03:58.577 SO libspdk_bdev_zone_block.so.5.0 00:03:58.577 SO libspdk_bdev_lvol.so.5.0 00:03:58.834 SYMLINK libspdk_bdev_zone_block.so 00:03:58.834 CC module/bdev/raid/raid0.o 00:03:58.834 SYMLINK libspdk_bdev_lvol.so 00:03:58.834 LIB libspdk_bdev_xnvme.a 00:03:58.834 CC module/bdev/raid/raid1.o 00:03:58.834 LIB libspdk_bdev_aio.a 00:03:58.834 CC module/bdev/ftl/bdev_ftl.o 00:03:58.834 SO libspdk_bdev_xnvme.so.2.0 00:03:58.834 CC module/bdev/iscsi/bdev_iscsi.o 00:03:58.834 SO libspdk_bdev_aio.so.5.0 00:03:58.834 SYMLINK libspdk_bdev_aio.so 00:03:58.834 SYMLINK libspdk_bdev_xnvme.so 00:03:58.834 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:58.834 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:59.092 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:59.092 CC module/bdev/nvme/vbdev_opal.o 00:03:59.092 CC module/bdev/raid/concat.o 00:03:59.092 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:59.092 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:59.092 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:59.092 LIB libspdk_bdev_ftl.a 00:03:59.092 SO libspdk_bdev_ftl.so.5.0 00:03:59.350 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:59.350 LIB libspdk_bdev_iscsi.a 00:03:59.350 SYMLINK libspdk_bdev_ftl.so 00:03:59.350 SO libspdk_bdev_iscsi.so.5.0 00:03:59.350 LIB libspdk_bdev_raid.a 00:03:59.350 SYMLINK libspdk_bdev_iscsi.so 00:03:59.350 SO libspdk_bdev_raid.so.5.0 00:03:59.608 SYMLINK libspdk_bdev_raid.so 00:03:59.608 LIB libspdk_bdev_virtio.a 00:03:59.608 SO libspdk_bdev_virtio.so.5.0 00:03:59.866 SYMLINK libspdk_bdev_virtio.so 00:04:00.433 LIB libspdk_bdev_nvme.a 00:04:00.433 SO libspdk_bdev_nvme.so.6.0 00:04:00.433 SYMLINK libspdk_bdev_nvme.so 00:04:00.999 CC module/event/subsystems/sock/sock.o 00:04:00.999 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:00.999 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:00.999 CC module/event/subsystems/vmd/vmd.o 00:04:00.999 CC module/event/subsystems/scheduler/scheduler.o 00:04:00.999 CC module/event/subsystems/iobuf/iobuf.o 
00:04:00.999 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:00.999 LIB libspdk_event_sock.a 00:04:00.999 LIB libspdk_event_vhost_blk.a 00:04:00.999 LIB libspdk_event_vmd.a 00:04:01.000 LIB libspdk_event_scheduler.a 00:04:01.000 SO libspdk_event_sock.so.4.0 00:04:01.000 LIB libspdk_event_iobuf.a 00:04:01.000 SO libspdk_event_vhost_blk.so.2.0 00:04:01.000 SO libspdk_event_scheduler.so.3.0 00:04:01.000 SO libspdk_event_vmd.so.5.0 00:04:01.000 SO libspdk_event_iobuf.so.2.0 00:04:01.258 SYMLINK libspdk_event_vhost_blk.so 00:04:01.258 SYMLINK libspdk_event_scheduler.so 00:04:01.258 SYMLINK libspdk_event_sock.so 00:04:01.258 SYMLINK libspdk_event_vmd.so 00:04:01.258 SYMLINK libspdk_event_iobuf.so 00:04:01.258 CC module/event/subsystems/accel/accel.o 00:04:01.516 LIB libspdk_event_accel.a 00:04:01.516 SO libspdk_event_accel.so.5.0 00:04:01.774 SYMLINK libspdk_event_accel.so 00:04:01.774 CC module/event/subsystems/bdev/bdev.o 00:04:02.032 LIB libspdk_event_bdev.a 00:04:02.032 SO libspdk_event_bdev.so.5.0 00:04:02.290 SYMLINK libspdk_event_bdev.so 00:04:02.290 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:02.290 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:02.290 CC module/event/subsystems/scsi/scsi.o 00:04:02.290 CC module/event/subsystems/nbd/nbd.o 00:04:02.290 CC module/event/subsystems/ublk/ublk.o 00:04:02.548 LIB libspdk_event_nbd.a 00:04:02.548 SO libspdk_event_nbd.so.5.0 00:04:02.548 LIB libspdk_event_scsi.a 00:04:02.548 LIB libspdk_event_ublk.a 00:04:02.548 SO libspdk_event_scsi.so.5.0 00:04:02.548 SO libspdk_event_ublk.so.2.0 00:04:02.548 SYMLINK libspdk_event_nbd.so 00:04:02.548 LIB libspdk_event_nvmf.a 00:04:02.806 SYMLINK libspdk_event_scsi.so 00:04:02.806 SO libspdk_event_nvmf.so.5.0 00:04:02.806 SYMLINK libspdk_event_ublk.so 00:04:02.806 SYMLINK libspdk_event_nvmf.so 00:04:02.806 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.806 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:03.065 LIB libspdk_event_vhost_scsi.a 00:04:03.065 LIB libspdk_event_iscsi.a 00:04:03.065 SO libspdk_event_iscsi.so.5.0 00:04:03.065 SO libspdk_event_vhost_scsi.so.2.0 00:04:03.065 SYMLINK libspdk_event_vhost_scsi.so 00:04:03.324 SYMLINK libspdk_event_iscsi.so 00:04:03.324 SO libspdk.so.5.0 00:04:03.324 SYMLINK libspdk.so 00:04:03.582 CXX app/trace/trace.o 00:04:03.582 CC examples/ioat/perf/perf.o 00:04:03.582 CC examples/accel/perf/accel_perf.o 00:04:03.582 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.582 CC examples/nvme/hello_world/hello_world.o 00:04:03.582 CC examples/sock/hello_world/hello_sock.o 00:04:03.582 CC examples/blob/hello_world/hello_blob.o 00:04:03.582 CC examples/nvmf/nvmf/nvmf.o 00:04:03.582 CC test/accel/dif/dif.o 00:04:03.582 CC examples/bdev/hello_world/hello_bdev.o 00:04:03.841 LINK lsvmd 00:04:04.100 LINK hello_blob 00:04:04.100 LINK hello_world 00:04:04.100 LINK hello_sock 00:04:04.100 LINK ioat_perf 00:04:04.100 LINK hello_bdev 00:04:04.100 LINK nvmf 00:04:04.100 LINK spdk_trace 00:04:04.100 CC examples/vmd/led/led.o 00:04:04.358 LINK dif 00:04:04.358 LINK accel_perf 00:04:04.358 CC examples/ioat/verify/verify.o 00:04:04.358 CC examples/nvme/reconnect/reconnect.o 00:04:04.358 CC examples/blob/cli/blobcli.o 00:04:04.358 LINK led 00:04:04.358 CC examples/util/zipf/zipf.o 00:04:04.358 CC examples/bdev/bdevperf/bdevperf.o 00:04:04.616 CC app/trace_record/trace_record.o 00:04:04.616 CC test/app/bdev_svc/bdev_svc.o 00:04:04.616 LINK verify 00:04:04.616 CC app/nvmf_tgt/nvmf_main.o 00:04:04.616 LINK zipf 00:04:04.616 CC test/bdev/bdevio/bdevio.o 00:04:04.874 LINK bdev_svc 
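The TEST_HEADER list that follows, paired with the CXX test/cpp_headers/*.o compiles further down, is SPDK's header self-sufficiency check: every public header under include/spdk is built in its own translation unit, so a header that forgets one of its own includes fails loudly. Each generated unit is essentially one include plus an empty main; a hedged reconstruction of one such unit (the generated file names are assumptions, and the real harness compiles them as C++, per the CXX lines):

/* One translation unit of the cpp_headers-style check, sketched from the
 * pattern visible in the log: one file per TEST_HEADER line below. */
#include "spdk/nvme.h"  /* the header under test */

int main(void)
{
    /* Nothing to execute: successfully compiling this TU is the test. */
    return 0;
}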
00:04:04.874 TEST_HEADER include/spdk/accel.h 00:04:04.875 TEST_HEADER include/spdk/accel_module.h 00:04:04.875 TEST_HEADER include/spdk/assert.h 00:04:04.875 TEST_HEADER include/spdk/barrier.h 00:04:04.875 TEST_HEADER include/spdk/base64.h 00:04:04.875 LINK nvmf_tgt 00:04:04.875 TEST_HEADER include/spdk/bdev.h 00:04:04.875 TEST_HEADER include/spdk/bdev_module.h 00:04:04.875 TEST_HEADER include/spdk/bdev_zone.h 00:04:04.875 TEST_HEADER include/spdk/bit_array.h 00:04:04.875 LINK reconnect 00:04:04.875 TEST_HEADER include/spdk/bit_pool.h 00:04:04.875 TEST_HEADER include/spdk/blob_bdev.h 00:04:04.875 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:04.875 TEST_HEADER include/spdk/blobfs.h 00:04:04.875 TEST_HEADER include/spdk/blob.h 00:04:04.875 TEST_HEADER include/spdk/conf.h 00:04:04.875 TEST_HEADER include/spdk/config.h 00:04:04.875 TEST_HEADER include/spdk/cpuset.h 00:04:04.875 TEST_HEADER include/spdk/crc16.h 00:04:04.875 TEST_HEADER include/spdk/crc32.h 00:04:04.875 CC test/blobfs/mkfs/mkfs.o 00:04:04.875 TEST_HEADER include/spdk/crc64.h 00:04:04.875 TEST_HEADER include/spdk/dif.h 00:04:04.875 TEST_HEADER include/spdk/dma.h 00:04:04.875 LINK spdk_trace_record 00:04:04.875 TEST_HEADER include/spdk/endian.h 00:04:04.875 TEST_HEADER include/spdk/env_dpdk.h 00:04:04.875 TEST_HEADER include/spdk/env.h 00:04:04.875 TEST_HEADER include/spdk/event.h 00:04:04.875 TEST_HEADER include/spdk/fd_group.h 00:04:04.875 TEST_HEADER include/spdk/fd.h 00:04:04.875 TEST_HEADER include/spdk/file.h 00:04:04.875 TEST_HEADER include/spdk/ftl.h 00:04:04.875 TEST_HEADER include/spdk/gpt_spec.h 00:04:04.875 TEST_HEADER include/spdk/hexlify.h 00:04:04.875 TEST_HEADER include/spdk/histogram_data.h 00:04:04.875 TEST_HEADER include/spdk/idxd.h 00:04:04.875 TEST_HEADER include/spdk/idxd_spec.h 00:04:04.875 TEST_HEADER include/spdk/init.h 00:04:04.875 TEST_HEADER include/spdk/ioat.h 00:04:04.875 TEST_HEADER include/spdk/ioat_spec.h 00:04:04.875 TEST_HEADER include/spdk/iscsi_spec.h 00:04:04.875 TEST_HEADER include/spdk/json.h 00:04:04.875 TEST_HEADER include/spdk/jsonrpc.h 00:04:04.875 TEST_HEADER include/spdk/likely.h 00:04:04.875 TEST_HEADER include/spdk/log.h 00:04:04.875 TEST_HEADER include/spdk/lvol.h 00:04:04.875 TEST_HEADER include/spdk/memory.h 00:04:04.875 TEST_HEADER include/spdk/mmio.h 00:04:04.875 TEST_HEADER include/spdk/nbd.h 00:04:04.875 TEST_HEADER include/spdk/notify.h 00:04:04.875 TEST_HEADER include/spdk/nvme.h 00:04:04.875 TEST_HEADER include/spdk/nvme_intel.h 00:04:04.875 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:04.875 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:04.875 TEST_HEADER include/spdk/nvme_spec.h 00:04:04.875 TEST_HEADER include/spdk/nvme_zns.h 00:04:04.875 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:04.875 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:04.875 TEST_HEADER include/spdk/nvmf.h 00:04:04.875 TEST_HEADER include/spdk/nvmf_spec.h 00:04:04.875 TEST_HEADER include/spdk/nvmf_transport.h 00:04:04.875 TEST_HEADER include/spdk/opal.h 00:04:04.875 TEST_HEADER include/spdk/opal_spec.h 00:04:04.875 TEST_HEADER include/spdk/pci_ids.h 00:04:04.875 TEST_HEADER include/spdk/pipe.h 00:04:04.875 TEST_HEADER include/spdk/queue.h 00:04:04.875 TEST_HEADER include/spdk/reduce.h 00:04:04.875 TEST_HEADER include/spdk/rpc.h 00:04:04.875 TEST_HEADER include/spdk/scheduler.h 00:04:04.875 TEST_HEADER include/spdk/scsi.h 00:04:04.875 TEST_HEADER include/spdk/scsi_spec.h 00:04:04.875 TEST_HEADER include/spdk/sock.h 00:04:04.875 TEST_HEADER include/spdk/stdinc.h 00:04:04.875 TEST_HEADER 
include/spdk/string.h 00:04:04.875 TEST_HEADER include/spdk/thread.h 00:04:04.875 TEST_HEADER include/spdk/trace.h 00:04:04.875 TEST_HEADER include/spdk/trace_parser.h 00:04:04.875 TEST_HEADER include/spdk/tree.h 00:04:04.875 TEST_HEADER include/spdk/ublk.h 00:04:04.875 TEST_HEADER include/spdk/util.h 00:04:04.875 TEST_HEADER include/spdk/uuid.h 00:04:04.875 TEST_HEADER include/spdk/version.h 00:04:04.875 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:04.875 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:04.875 TEST_HEADER include/spdk/vhost.h 00:04:04.875 TEST_HEADER include/spdk/vmd.h 00:04:04.875 TEST_HEADER include/spdk/xor.h 00:04:04.875 TEST_HEADER include/spdk/zipf.h 00:04:05.133 CXX test/cpp_headers/accel.o 00:04:05.133 CC test/dma/test_dma/test_dma.o 00:04:05.133 LINK mkfs 00:04:05.133 LINK blobcli 00:04:05.133 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:05.133 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:05.133 CC app/iscsi_tgt/iscsi_tgt.o 00:04:05.133 CXX test/cpp_headers/accel_module.o 00:04:05.133 CC test/env/mem_callbacks/mem_callbacks.o 00:04:05.133 LINK bdevio 00:04:05.397 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:05.397 CXX test/cpp_headers/assert.o 00:04:05.397 LINK iscsi_tgt 00:04:05.397 CC test/event/event_perf/event_perf.o 00:04:05.397 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:05.654 LINK test_dma 00:04:05.654 CXX test/cpp_headers/barrier.o 00:04:05.654 LINK bdevperf 00:04:05.654 LINK event_perf 00:04:05.654 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:05.654 LINK nvme_fuzz 00:04:05.654 CC app/spdk_tgt/spdk_tgt.o 00:04:05.654 CXX test/cpp_headers/base64.o 00:04:05.654 LINK nvme_manage 00:04:05.912 CC test/event/reactor/reactor.o 00:04:05.912 LINK mem_callbacks 00:04:05.912 CC app/spdk_lspci/spdk_lspci.o 00:04:05.912 CXX test/cpp_headers/bdev.o 00:04:05.912 LINK spdk_tgt 00:04:05.912 LINK reactor 00:04:05.912 LINK spdk_lspci 00:04:05.912 CC test/nvme/aer/aer.o 00:04:05.912 CC test/lvol/esnap/esnap.o 00:04:06.169 CC test/env/vtophys/vtophys.o 00:04:06.169 CC examples/nvme/arbitration/arbitration.o 00:04:06.169 LINK vhost_fuzz 00:04:06.169 CXX test/cpp_headers/bdev_module.o 00:04:06.169 CXX test/cpp_headers/bdev_zone.o 00:04:06.169 CC test/event/reactor_perf/reactor_perf.o 00:04:06.169 CC app/spdk_nvme_perf/perf.o 00:04:06.169 LINK vtophys 00:04:06.426 CXX test/cpp_headers/bit_array.o 00:04:06.426 LINK aer 00:04:06.426 LINK reactor_perf 00:04:06.426 CC test/event/app_repeat/app_repeat.o 00:04:06.426 CC test/event/scheduler/scheduler.o 00:04:06.426 CXX test/cpp_headers/bit_pool.o 00:04:06.426 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:06.682 LINK arbitration 00:04:06.682 CC app/spdk_nvme_identify/identify.o 00:04:06.683 CC test/nvme/reset/reset.o 00:04:06.683 LINK app_repeat 00:04:06.683 CXX test/cpp_headers/blob_bdev.o 00:04:06.683 LINK scheduler 00:04:06.939 LINK env_dpdk_post_init 00:04:06.939 CC examples/nvme/hotplug/hotplug.o 00:04:06.939 CC test/rpc_client/rpc_client_test.o 00:04:06.939 LINK reset 00:04:06.939 CXX test/cpp_headers/blobfs_bdev.o 00:04:07.197 CC test/env/memory/memory_ut.o 00:04:07.197 LINK rpc_client_test 00:04:07.197 CC app/spdk_nvme_discover/discovery_aer.o 00:04:07.197 LINK hotplug 00:04:07.454 CXX test/cpp_headers/blobfs.o 00:04:07.454 CC test/nvme/sgl/sgl.o 00:04:07.454 LINK spdk_nvme_perf 00:04:07.454 LINK spdk_nvme_discover 00:04:07.454 CC test/nvme/e2edp/nvme_dp.o 00:04:07.454 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:07.711 CXX test/cpp_headers/blob.o 00:04:07.711 CC test/nvme/overhead/overhead.o 
00:04:07.711 CC test/nvme/err_injection/err_injection.o 00:04:07.711 LINK cmb_copy 00:04:07.711 LINK spdk_nvme_identify 00:04:07.711 LINK iscsi_fuzz 00:04:07.711 LINK sgl 00:04:07.711 LINK nvme_dp 00:04:07.969 CXX test/cpp_headers/conf.o 00:04:07.969 LINK err_injection 00:04:07.969 LINK overhead 00:04:07.969 CC examples/nvme/abort/abort.o 00:04:07.969 CC app/spdk_top/spdk_top.o 00:04:08.227 CXX test/cpp_headers/config.o 00:04:08.227 CC test/nvme/reserve/reserve.o 00:04:08.227 CC test/nvme/startup/startup.o 00:04:08.227 CXX test/cpp_headers/cpuset.o 00:04:08.227 CC test/app/histogram_perf/histogram_perf.o 00:04:08.227 CXX test/cpp_headers/crc16.o 00:04:08.227 CXX test/cpp_headers/crc32.o 00:04:08.227 LINK startup 00:04:08.484 LINK reserve 00:04:08.484 LINK memory_ut 00:04:08.484 CC test/nvme/simple_copy/simple_copy.o 00:04:08.484 LINK histogram_perf 00:04:08.484 CXX test/cpp_headers/crc64.o 00:04:08.484 CC test/nvme/connect_stress/connect_stress.o 00:04:08.742 LINK abort 00:04:08.742 CXX test/cpp_headers/dif.o 00:04:08.742 CC test/app/jsoncat/jsoncat.o 00:04:08.742 CC examples/thread/thread/thread_ex.o 00:04:08.742 LINK connect_stress 00:04:08.742 CC examples/idxd/perf/perf.o 00:04:08.742 LINK simple_copy 00:04:08.742 CC test/env/pci/pci_ut.o 00:04:08.742 CXX test/cpp_headers/dma.o 00:04:08.999 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:08.999 CXX test/cpp_headers/endian.o 00:04:08.999 LINK jsoncat 00:04:08.999 LINK thread 00:04:09.256 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:09.256 CXX test/cpp_headers/env_dpdk.o 00:04:09.256 CC test/nvme/boot_partition/boot_partition.o 00:04:09.256 LINK pmr_persistence 00:04:09.256 CC test/app/stub/stub.o 00:04:09.256 LINK idxd_perf 00:04:09.256 CXX test/cpp_headers/env.o 00:04:09.256 LINK interrupt_tgt 00:04:09.513 LINK boot_partition 00:04:09.513 LINK pci_ut 00:04:09.513 CXX test/cpp_headers/event.o 00:04:09.513 LINK spdk_top 00:04:09.513 CXX test/cpp_headers/fd_group.o 00:04:09.513 LINK stub 00:04:09.513 CC test/thread/poller_perf/poller_perf.o 00:04:09.513 CXX test/cpp_headers/fd.o 00:04:09.770 CXX test/cpp_headers/file.o 00:04:09.770 CC app/vhost/vhost.o 00:04:09.770 LINK poller_perf 00:04:09.770 CC app/spdk_dd/spdk_dd.o 00:04:09.770 CC test/nvme/compliance/nvme_compliance.o 00:04:09.770 CC test/nvme/fused_ordering/fused_ordering.o 00:04:09.770 CC app/fio/nvme/fio_plugin.o 00:04:09.770 CXX test/cpp_headers/ftl.o 00:04:09.770 CXX test/cpp_headers/gpt_spec.o 00:04:09.770 CC app/fio/bdev/fio_plugin.o 00:04:10.026 CXX test/cpp_headers/hexlify.o 00:04:10.026 LINK vhost 00:04:10.026 LINK fused_ordering 00:04:10.026 CXX test/cpp_headers/histogram_data.o 00:04:10.026 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:10.283 LINK nvme_compliance 00:04:10.283 CXX test/cpp_headers/idxd.o 00:04:10.283 CC test/nvme/fdp/fdp.o 00:04:10.283 LINK spdk_dd 00:04:10.283 CXX test/cpp_headers/idxd_spec.o 00:04:10.283 CXX test/cpp_headers/init.o 00:04:10.283 CXX test/cpp_headers/ioat.o 00:04:10.283 LINK doorbell_aers 00:04:10.540 CC test/nvme/cuse/cuse.o 00:04:10.540 CXX test/cpp_headers/ioat_spec.o 00:04:10.540 CXX test/cpp_headers/iscsi_spec.o 00:04:10.540 CXX test/cpp_headers/json.o 00:04:10.540 LINK spdk_bdev 00:04:10.540 CXX test/cpp_headers/jsonrpc.o 00:04:10.540 CXX test/cpp_headers/likely.o 00:04:10.540 CXX test/cpp_headers/log.o 00:04:10.798 CXX test/cpp_headers/lvol.o 00:04:10.798 CXX test/cpp_headers/memory.o 00:04:10.798 LINK spdk_nvme 00:04:10.798 CXX test/cpp_headers/mmio.o 00:04:10.798 LINK fdp 00:04:10.798 CXX test/cpp_headers/nbd.o 
00:04:10.798 CXX test/cpp_headers/notify.o 00:04:10.798 CXX test/cpp_headers/nvme.o 00:04:10.798 CXX test/cpp_headers/nvme_intel.o 00:04:10.798 CXX test/cpp_headers/nvme_ocssd.o 00:04:10.798 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:10.798 CXX test/cpp_headers/nvme_spec.o 00:04:10.798 CXX test/cpp_headers/nvme_zns.o 00:04:11.056 CXX test/cpp_headers/nvmf_cmd.o 00:04:11.056 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:11.056 CXX test/cpp_headers/nvmf.o 00:04:11.056 CXX test/cpp_headers/nvmf_spec.o 00:04:11.056 CXX test/cpp_headers/nvmf_transport.o 00:04:11.056 CXX test/cpp_headers/opal.o 00:04:11.056 CXX test/cpp_headers/opal_spec.o 00:04:11.056 CXX test/cpp_headers/pci_ids.o 00:04:11.319 CXX test/cpp_headers/pipe.o 00:04:11.319 CXX test/cpp_headers/queue.o 00:04:11.319 CXX test/cpp_headers/reduce.o 00:04:11.319 CXX test/cpp_headers/rpc.o 00:04:11.319 CXX test/cpp_headers/scheduler.o 00:04:11.319 CXX test/cpp_headers/scsi.o 00:04:11.319 CXX test/cpp_headers/scsi_spec.o 00:04:11.319 CXX test/cpp_headers/sock.o 00:04:11.319 CXX test/cpp_headers/stdinc.o 00:04:11.319 CXX test/cpp_headers/string.o 00:04:11.319 CXX test/cpp_headers/thread.o 00:04:11.319 CXX test/cpp_headers/trace.o 00:04:11.606 CXX test/cpp_headers/trace_parser.o 00:04:11.606 CXX test/cpp_headers/tree.o 00:04:11.606 CXX test/cpp_headers/ublk.o 00:04:11.606 CXX test/cpp_headers/util.o 00:04:11.606 CXX test/cpp_headers/uuid.o 00:04:11.606 CXX test/cpp_headers/version.o 00:04:11.606 CXX test/cpp_headers/vfio_user_pci.o 00:04:11.606 CXX test/cpp_headers/vfio_user_spec.o 00:04:11.606 CXX test/cpp_headers/vhost.o 00:04:11.606 CXX test/cpp_headers/vmd.o 00:04:11.606 CXX test/cpp_headers/xor.o 00:04:11.606 CXX test/cpp_headers/zipf.o 00:04:11.864 LINK cuse 00:04:12.798 LINK esnap 00:04:13.364 00:04:13.364 real 1m17.048s 00:04:13.364 user 7m42.276s 00:04:13.364 sys 1m40.138s 00:04:13.364 12:25:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:13.364 ************************************ 00:04:13.364 END TEST make 00:04:13.364 ************************************ 00:04:13.364 12:25:22 -- common/autotest_common.sh@10 -- $ set +x 00:04:13.622 12:25:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.622 12:25:22 -- nvmf/common.sh@7 -- # uname -s 00:04:13.622 12:25:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.622 12:25:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.622 12:25:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.622 12:25:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.622 12:25:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.622 12:25:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.622 12:25:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.622 12:25:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.622 12:25:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.622 12:25:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.622 12:25:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fd0a5f7a-4d1c-4902-ae17-94c770fe00e0 00:04:13.622 12:25:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=fd0a5f7a-4d1c-4902-ae17-94c770fe00e0 00:04:13.622 12:25:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.622 12:25:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.622 12:25:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.622 12:25:22 -- nvmf/common.sh@44 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.622 12:25:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.622 12:25:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.622 12:25:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.622 12:25:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.622 12:25:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.622 12:25:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.622 12:25:22 -- paths/export.sh@5 -- # export PATH 00:04:13.622 12:25:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.622 12:25:22 -- nvmf/common.sh@46 -- # : 0 00:04:13.622 12:25:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:13.622 12:25:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:13.622 12:25:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:13.622 12:25:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.622 12:25:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.622 12:25:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:13.622 12:25:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:13.622 12:25:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:13.622 12:25:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:13.622 12:25:22 -- spdk/autotest.sh@32 -- # uname -s 00:04:13.622 12:25:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:13.622 12:25:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:13.622 12:25:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.622 12:25:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:13.622 12:25:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.622 12:25:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:13.622 12:25:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:13.622 12:25:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:13.622 12:25:22 -- spdk/autotest.sh@48 -- # udevadm_pid=48459 00:04:13.622 12:25:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:13.622 12:25:22 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:13.622 12:25:22 -- spdk/autotest.sh@54 -- # echo 48462 00:04:13.622 12:25:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power 00:04:13.622 12:25:22 -- spdk/autotest.sh@56 -- # echo 48463 00:04:13.622 12:25:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:13.622 12:25:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:13.622 12:25:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:13.622 12:25:22 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:13.622 12:25:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:13.622 12:25:22 -- common/autotest_common.sh@10 -- # set +x 00:04:13.622 12:25:22 -- spdk/autotest.sh@70 -- # create_test_list 00:04:13.622 12:25:22 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:13.622 12:25:22 -- common/autotest_common.sh@10 -- # set +x 00:04:13.622 12:25:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:13.622 12:25:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:13.622 12:25:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:13.622 12:25:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:13.622 12:25:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:13.622 12:25:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:13.622 12:25:22 -- common/autotest_common.sh@1440 -- # uname 00:04:13.622 12:25:22 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:13.622 12:25:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:13.622 12:25:22 -- common/autotest_common.sh@1460 -- # uname 00:04:13.622 12:25:22 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:13.622 12:25:22 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:13.622 12:25:22 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:13.622 12:25:22 -- spdk/autotest.sh@83 -- # hash lcov 00:04:13.622 12:25:22 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:13.622 12:25:22 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:13.622 --rc lcov_branch_coverage=1 00:04:13.622 --rc lcov_function_coverage=1 00:04:13.622 --rc genhtml_branch_coverage=1 00:04:13.622 --rc genhtml_function_coverage=1 00:04:13.622 --rc genhtml_legend=1 00:04:13.622 --rc geninfo_all_blocks=1 00:04:13.622 ' 00:04:13.622 12:25:22 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:13.622 --rc lcov_branch_coverage=1 00:04:13.622 --rc lcov_function_coverage=1 00:04:13.622 --rc genhtml_branch_coverage=1 00:04:13.622 --rc genhtml_function_coverage=1 00:04:13.622 --rc genhtml_legend=1 00:04:13.622 --rc geninfo_all_blocks=1 00:04:13.622 ' 00:04:13.622 12:25:22 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:13.622 --rc lcov_branch_coverage=1 00:04:13.622 --rc lcov_function_coverage=1 00:04:13.622 --rc genhtml_branch_coverage=1 00:04:13.622 --rc genhtml_function_coverage=1 00:04:13.623 --rc genhtml_legend=1 00:04:13.623 --rc geninfo_all_blocks=1 00:04:13.623 --no-external' 00:04:13.623 12:25:22 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:13.623 --rc lcov_branch_coverage=1 00:04:13.623 --rc lcov_function_coverage=1 00:04:13.623 --rc genhtml_branch_coverage=1 00:04:13.623 --rc genhtml_function_coverage=1 00:04:13.623 --rc genhtml_legend=1 00:04:13.623 --rc geninfo_all_blocks=1 00:04:13.623 --no-external' 00:04:13.623 12:25:22 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:13.881 lcov: LCOV version 1.14 00:04:13.881 12:25:22 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:23.851 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:23.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:23.851 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:23.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:23.851 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:23.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 
00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions 
found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:41.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:41.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:41.923 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:41.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:45.208 12:25:53 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:45.208 12:25:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:45.208 12:25:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.208 12:25:53 -- spdk/autotest.sh@102 -- # rm -f 00:04:45.208 12:25:53 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.772 lsblk: /dev/nvme3c3n1: not a block device 00:04:46.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.030 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:46.030 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:46.030 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:46.030 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:46.030 12:25:54 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:46.030 12:25:54 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:46.030 12:25:54 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:46.030 12:25:54 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:46.030 12:25:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.030 12:25:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.030 12:25:54 -- common/autotest_common.sh@1658 -- # 
is_block_zoned nvme1n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.030 12:25:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.030 12:25:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:04:46.030 12:25:54 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:04:46.030 12:25:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.030 12:25:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:04:46.030 12:25:54 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:04:46.030 12:25:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.030 12:25:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.030 12:25:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:46.030 12:25:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:46.030 12:25:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.030 12:25:54 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:46.030 12:25:55 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:04:46.030 12:25:55 -- spdk/autotest.sh@121 -- # grep -v p 00:04:46.030 12:25:55 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:46.030 12:25:55 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:46.030 12:25:55 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:46.030 12:25:55 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:46.030 12:25:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:46.288 No valid GPT data, bailing 00:04:46.288 12:25:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.288 12:25:55 -- scripts/common.sh@393 -- # pt= 00:04:46.288 12:25:55 -- scripts/common.sh@394 -- # return 1 00:04:46.288 12:25:55 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:46.288 1+0 records in 00:04:46.288 
1+0 records out 00:04:46.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121932 s, 86.0 MB/s 00:04:46.288 12:25:55 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:46.288 12:25:55 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:46.288 12:25:55 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:46.288 12:25:55 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:46.288 12:25:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:46.288 No valid GPT data, bailing 00:04:46.288 12:25:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:46.288 12:25:55 -- scripts/common.sh@393 -- # pt= 00:04:46.288 12:25:55 -- scripts/common.sh@394 -- # return 1 00:04:46.288 12:25:55 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:46.288 1+0 records in 00:04:46.288 1+0 records out 00:04:46.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045804 s, 229 MB/s 00:04:46.288 12:25:55 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:46.288 12:25:55 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:46.288 12:25:55 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:04:46.288 12:25:55 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:46.288 12:25:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:46.288 No valid GPT data, bailing 00:04:46.288 12:25:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:46.288 12:25:55 -- scripts/common.sh@393 -- # pt= 00:04:46.288 12:25:55 -- scripts/common.sh@394 -- # return 1 00:04:46.288 12:25:55 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:46.288 1+0 records in 00:04:46.288 1+0 records out 00:04:46.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515512 s, 203 MB/s 00:04:46.288 12:25:55 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:46.288 12:25:55 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:46.288 12:25:55 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n2 00:04:46.288 12:25:55 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:04:46.288 12:25:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:46.288 No valid GPT data, bailing 00:04:46.288 12:25:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:46.546 12:25:55 -- scripts/common.sh@393 -- # pt= 00:04:46.546 12:25:55 -- scripts/common.sh@394 -- # return 1 00:04:46.546 12:25:55 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:46.546 1+0 records in 00:04:46.546 1+0 records out 00:04:46.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490254 s, 214 MB/s 00:04:46.546 12:25:55 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:46.546 12:25:55 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:46.546 12:25:55 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n3 00:04:46.546 12:25:55 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:04:46.546 12:25:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:46.546 No valid GPT data, bailing 00:04:46.546 12:25:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:46.546 12:25:55 -- scripts/common.sh@393 -- # pt= 00:04:46.546 12:25:55 -- scripts/common.sh@394 -- # return 1 00:04:46.546 12:25:55 -- spdk/autotest.sh@125 
-- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:46.546 1+0 records in 00:04:46.546 1+0 records out 00:04:46.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00354058 s, 296 MB/s 00:04:46.546 12:25:55 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:46.546 12:25:55 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:46.546 12:25:55 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:04:46.546 12:25:55 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:46.546 12:25:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:46.546 No valid GPT data, bailing 00:04:46.546 12:25:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:46.546 12:25:55 -- scripts/common.sh@393 -- # pt= 00:04:46.546 12:25:55 -- scripts/common.sh@394 -- # return 1 00:04:46.546 12:25:55 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:46.546 1+0 records in 00:04:46.546 1+0 records out 00:04:46.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458265 s, 229 MB/s 00:04:46.546 12:25:55 -- spdk/autotest.sh@129 -- # sync 00:04:46.546 12:25:55 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:46.546 12:25:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:46.546 12:25:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:48.452 12:25:57 -- spdk/autotest.sh@135 -- # uname -s 00:04:48.452 12:25:57 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:48.452 12:25:57 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:48.452 12:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.452 12:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.452 12:25:57 -- common/autotest_common.sh@10 -- # set +x 00:04:48.452 ************************************ 00:04:48.452 START TEST setup.sh 00:04:48.452 ************************************ 00:04:48.452 12:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:48.452 * Looking for test storage... 00:04:48.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.452 12:25:57 -- setup/test-setup.sh@10 -- # uname -s 00:04:48.452 12:25:57 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:48.452 12:25:57 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:48.453 12:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.453 12:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.453 12:25:57 -- common/autotest_common.sh@10 -- # set +x 00:04:48.453 ************************************ 00:04:48.453 START TEST acl 00:04:48.453 ************************************ 00:04:48.453 12:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:48.453 * Looking for test storage... 
00:04:48.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.453 12:25:57 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:48.453 12:25:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:48.453 12:25:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:48.453 12:25:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:48.453 12:25:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.453 12:25:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.453 12:25:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.453 12:25:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.453 12:25:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:04:48.453 12:25:57 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:04:48.453 12:25:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.453 12:25:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:04:48.453 12:25:57 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:04:48.453 12:25:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.453 12:25:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.453 12:25:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:48.453 12:25:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:48.453 12:25:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.453 12:25:57 -- setup/acl.sh@12 -- # devs=() 00:04:48.453 12:25:57 -- setup/acl.sh@12 -- # declare -a devs 
00:04:48.453 12:25:57 -- setup/acl.sh@13 -- # drivers=() 00:04:48.453 12:25:57 -- setup/acl.sh@13 -- # declare -A drivers 00:04:48.453 12:25:57 -- setup/acl.sh@51 -- # setup reset 00:04:48.453 12:25:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.453 12:25:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.828 12:25:58 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:49.828 12:25:58 -- setup/acl.sh@16 -- # local dev driver 00:04:49.828 12:25:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.828 12:25:58 -- setup/acl.sh@15 -- # setup output status 00:04:49.828 12:25:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.828 12:25:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:50.085 Hugepages 00:04:50.085 node hugesize free / total 00:04:50.085 12:25:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:50.085 12:25:58 -- setup/acl.sh@19 -- # continue 00:04:50.085 12:25:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.085 00:04:50.085 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.085 12:25:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:50.085 12:25:58 -- setup/acl.sh@19 -- # continue 00:04:50.085 12:25:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.085 12:25:58 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:50.085 12:25:58 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:50.085 12:25:58 -- setup/acl.sh@20 -- # continue 00:04:50.085 12:25:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.085 12:25:59 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:50.085 12:25:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.085 12:25:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:50.085 12:25:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.085 12:25:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.085 12:25:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.343 12:25:59 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:50.343 12:25:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.343 12:25:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:50.343 12:25:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.343 12:25:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.343 12:25:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.343 12:25:59 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:50.343 12:25:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.343 12:25:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:50.343 12:25:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.343 12:25:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.343 12:25:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.343 12:25:59 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:50.343 12:25:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.343 12:25:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:50.343 12:25:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.343 12:25:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.343 12:25:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.343 12:25:59 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:50.343 12:25:59 -- setup/acl.sh@54 -- # run_test denied denied 00:04:50.343 12:25:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.343 
12:25:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.343 12:25:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.343 ************************************ 00:04:50.343 START TEST denied 00:04:50.343 ************************************ 00:04:50.343 12:25:59 -- common/autotest_common.sh@1104 -- # denied 00:04:50.343 12:25:59 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:50.343 12:25:59 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:50.343 12:25:59 -- setup/acl.sh@38 -- # setup output config 00:04:50.343 12:25:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.343 12:25:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.275 lsblk: /dev/nvme3c3n1: not a block device 00:04:51.839 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:51.839 12:26:00 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:51.839 12:26:00 -- setup/acl.sh@28 -- # local dev driver 00:04:51.839 12:26:00 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:51.839 12:26:00 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:51.839 12:26:00 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:51.839 12:26:00 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:51.839 12:26:00 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:51.839 12:26:00 -- setup/acl.sh@41 -- # setup reset 00:04:51.839 12:26:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.839 12:26:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.402 00:04:58.402 real 0m7.350s 00:04:58.402 user 0m0.925s 00:04:58.402 sys 0m1.468s 00:04:58.402 12:26:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.402 12:26:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.402 ************************************ 00:04:58.402 END TEST denied 00:04:58.403 ************************************ 00:04:58.403 12:26:06 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:58.403 12:26:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.403 12:26:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.403 12:26:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 ************************************ 00:04:58.403 START TEST allowed 00:04:58.403 ************************************ 00:04:58.403 12:26:06 -- common/autotest_common.sh@1104 -- # allowed 00:04:58.403 12:26:06 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:58.403 12:26:06 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:58.403 12:26:06 -- setup/acl.sh@45 -- # setup output config 00:04:58.403 12:26:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.403 12:26:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.967 lsblk: /dev/nvme1c1n1: not a block device 00:04:59.224 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.224 12:26:08 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:59.224 12:26:08 -- setup/acl.sh@28 -- # local dev driver 00:04:59.224 12:26:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.224 12:26:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:59.224 12:26:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:59.224 12:26:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.224 12:26:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 
00:04:59.224 12:26:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.224 12:26:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:59.224 12:26:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:59.224 12:26:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.224 12:26:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:59.224 12:26:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.224 12:26:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:59.225 12:26:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:59.225 12:26:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.225 12:26:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:59.225 12:26:08 -- setup/acl.sh@48 -- # setup reset 00:04:59.225 12:26:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.225 12:26:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.625 ************************************ 00:05:00.625 END TEST allowed 00:05:00.625 ************************************ 00:05:00.625 00:05:00.625 real 0m2.536s 00:05:00.625 user 0m1.107s 00:05:00.625 sys 0m1.428s 00:05:00.625 12:26:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.625 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.625 ************************************ 00:05:00.625 END TEST acl 00:05:00.625 ************************************ 00:05:00.625 00:05:00.625 real 0m11.985s 00:05:00.625 user 0m2.927s 00:05:00.625 sys 0m4.136s 00:05:00.625 12:26:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.625 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.625 12:26:09 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:00.625 12:26:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.625 12:26:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.625 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.625 ************************************ 00:05:00.625 START TEST hugepages 00:05:00.625 ************************************ 00:05:00.625 12:26:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:00.625 * Looking for test storage... 
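
Both acl tests that just finished steer setup.sh through environment filters and then re-check driver bindings: "denied" exports PCI_BLOCKED=0000:00:06.0 and greps the config output for the "Skipping denied controller" line, while "allowed" exports PCI_ALLOWED=0000:00:06.0 so only that controller is rebound (the "nvme -> uio_pci_generic" line in the log) and the other three must stay on nvme. The shared verify() step, sketched from the acl.sh xtrace:

    verify() {
        local dev driver
        for dev in "$@"; do
            # The controller must still be present on the PCI bus...
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            # ...and its driver symlink must still resolve to the kernel nvme
            # driver, proving setup.sh left it alone.
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }
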
00:05:00.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:00.625 12:26:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:00.625 12:26:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:00.625 12:26:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:00.625 12:26:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:00.625 12:26:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:00.625 12:26:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:00.625 12:26:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:00.625 12:26:09 -- setup/common.sh@18 -- # local node= 00:05:00.625 12:26:09 -- setup/common.sh@19 -- # local var val 00:05:00.625 12:26:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.625 12:26:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.625 12:26:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.625 12:26:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.625 12:26:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.625 12:26:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.625 12:26:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5440868 kB' 'MemAvailable: 7405300 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 836904 kB' 'Inactive: 1442168 kB' 'Active(anon): 110516 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 101684 kB' 'Mapped: 48764 kB' 'Shmem: 10512 kB' 'KReclaimable: 66076 kB' 'Slab: 143688 kB' 'SReclaimable: 66076 kB' 'SUnreclaim: 77612 kB' 'KernelStack: 6368 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 325424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:00.625 12:26:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.625 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.625 12:26:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.625 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.625 12:26:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.625 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.625 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.625 12:26:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.625 12:26:09 -- 
setup/common.sh@32 -- # continue
[... xtrace condensed: the identical IFS=': ' / read -r var val _ / field-test / continue cycle repeats for every remaining /proc/meminfo key, from Cached through ShmemHugePages, without matching Hugepagesize ...]
00:05:00.626 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.626 12:26:09 --
setup/common.sh@31 -- # read -r var val _ 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.626 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.626 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.626 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.626 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.626 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.626 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.626 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.626 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # continue 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.627 12:26:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.627 12:26:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.627 12:26:09 -- setup/common.sh@33 -- # echo 2048 00:05:00.627 12:26:09 -- setup/common.sh@33 -- # return 0 00:05:00.627 12:26:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:00.627 12:26:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:00.627 12:26:09 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:00.627 12:26:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:00.627 12:26:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:00.627 12:26:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
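
The 2048 echoed above is how the hugepage defaults get seeded: get_meminfo walks /proc/meminfo (or a single node's meminfo) and prints the numeric value for the requested key, so default_hugepages becomes 2048 kB and the per-node counts below are sized from it. A sketch of the parser reconstructed from the xtrace, not the verbatim setup/common.sh source (extglob is assumed enabled, as SPDK's scripts do):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N" prefixes from per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"   # "Hugepagesize:  2048 kB" -> 2048
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. default_hugepages=$(get_meminfo Hugepagesize)   -> 2048
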
00:05:00.627 12:26:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:00.627 12:26:09 -- setup/hugepages.sh@207 -- # get_nodes 00:05:00.627 12:26:09 -- setup/hugepages.sh@27 -- # local node 00:05:00.627 12:26:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.627 12:26:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:00.627 12:26:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.627 12:26:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.627 12:26:09 -- setup/hugepages.sh@208 -- # clear_hp 00:05:00.627 12:26:09 -- setup/hugepages.sh@37 -- # local node hp 00:05:00.627 12:26:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.627 12:26:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.627 12:26:09 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.627 12:26:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.627 12:26:09 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.627 12:26:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.627 12:26:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.627 12:26:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:00.627 12:26:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.627 12:26:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.627 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.627 ************************************ 00:05:00.627 START TEST default_setup 00:05:00.627 ************************************ 00:05:00.627 12:26:09 -- common/autotest_common.sh@1104 -- # default_setup 00:05:00.627 12:26:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:00.627 12:26:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.627 12:26:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.627 12:26:09 -- setup/hugepages.sh@51 -- # shift 00:05:00.627 12:26:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.627 12:26:09 -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.627 12:26:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.627 12:26:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.627 12:26:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.627 12:26:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.627 12:26:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.627 12:26:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.627 12:26:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.627 12:26:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.627 12:26:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.627 12:26:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.627 12:26:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.627 12:26:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.627 12:26:09 -- setup/hugepages.sh@73 -- # return 0 00:05:00.627 12:26:09 -- setup/hugepages.sh@137 -- # setup output 00:05:00.627 12:26:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.627 12:26:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.577 lsblk: /dev/nvme1c1n1: not a block device 00:05:01.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.835 0000:00:07.0 
(1b36 0010): nvme -> uio_pci_generic 00:05:01.835 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.835 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.096 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.096 12:26:10 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:02.096 12:26:10 -- setup/hugepages.sh@89 -- # local node 00:05:02.096 12:26:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.096 12:26:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.096 12:26:10 -- setup/hugepages.sh@92 -- # local surp 00:05:02.096 12:26:10 -- setup/hugepages.sh@93 -- # local resv 00:05:02.096 12:26:10 -- setup/hugepages.sh@94 -- # local anon 00:05:02.096 12:26:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.096 12:26:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.096 12:26:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.096 12:26:10 -- setup/common.sh@18 -- # local node= 00:05:02.096 12:26:10 -- setup/common.sh@19 -- # local var val 00:05:02.096 12:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.096 12:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.096 12:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.096 12:26:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.096 12:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.096 12:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.096 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.096 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.097 12:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7518376 kB' 'MemAvailable: 9482568 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 854460 kB' 'Inactive: 1442180 kB' 'Active(anon): 128072 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119168 kB' 'Mapped: 48824 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143100 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6368 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:02.097 12:26:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.097 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.097 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.097 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.097 12:26:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.097 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.097 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.097 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.097 12:26:10 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.097 12:26:10 -- setup/common.sh@32 -- # continue
[... xtrace condensed: the same read / field-test / continue cycle skips every remaining /proc/meminfo key from Buffers through HardwareCorrupted ...]
00:05:02.097 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.097 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.097 12:26:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 12:26:10 --
setup/common.sh@33 -- # echo 0 00:05:02.097 12:26:10 -- setup/common.sh@33 -- # return 0 00:05:02.097 12:26:10 -- setup/hugepages.sh@97 -- # anon=0 00:05:02.097 12:26:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.097 12:26:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.097 12:26:10 -- setup/common.sh@18 -- # local node= 00:05:02.097 12:26:10 -- setup/common.sh@19 -- # local var val 00:05:02.097 12:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.097 12:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.097 12:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.097 12:26:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.097 12:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.097 12:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.097 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.097 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.098 12:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7518628 kB' 'MemAvailable: 9482820 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 853952 kB' 'Inactive: 1442180 kB' 'Active(anon): 127564 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118688 kB' 'Mapped: 48824 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143108 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77532 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.098 12:26:10 -- 
setup/common.sh@32 -- # continue
[... xtrace condensed: the same read / field-test / continue cycle skips each remaining /proc/meminfo key from SwapCached through ShmemHugePages ...]
00:05:02.098 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.098 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.098 12:26:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # continue 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.099 12:26:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.099 12:26:10 -- setup/common.sh@33 -- # echo 0 00:05:02.099 12:26:10 -- setup/common.sh@33 -- # return 0 00:05:02.099 12:26:10 -- setup/hugepages.sh@99 -- # surp=0 00:05:02.099 12:26:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.099 12:26:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.099 12:26:10 -- setup/common.sh@18 -- # local node= 00:05:02.099 12:26:10 -- setup/common.sh@19 -- # local var val 00:05:02.099 12:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.099 12:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.099 12:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.099 12:26:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.099 12:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.099 12:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.099 12:26:10 -- 
00:05:02.099 12:26:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:02.099 12:26:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:02.099 12:26:10 -- setup/common.sh@18 -- # local node=
00:05:02.099 12:26:10 -- setup/common.sh@19 -- # local var val
00:05:02.099 12:26:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.099 12:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.099 12:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.099 12:26:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.099 12:26:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.099 12:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.099 12:26:10 -- setup/common.sh@31 -- # IFS=': '
00:05:02.099 12:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7518380 kB' 'MemAvailable: 9482572 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 853804 kB' 'Inactive: 1442180 kB' 'Active(anon): 127416 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118564 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143108 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77532 kB' 'KernelStack: 6288 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:05:02.099 12:26:10 -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: each /proc/meminfo field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue ...]
00:05:02.100 12:26:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:02.100 12:26:10 -- setup/common.sh@33 -- # echo 0
00:05:02.100 12:26:10 -- setup/common.sh@33 -- # return 0
00:05:02.100 12:26:10 -- setup/hugepages.sh@100 -- # resv=0
00:05:02.100 12:26:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:02.100 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:05:02.100 12:26:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:02.100 12:26:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:02.100 12:26:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:02.100 12:26:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.100 12:26:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
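The two arithmetic guards just traced carry the substance of the check: the pool the kernel reports must account for the configured, surplus, and reserved pages. A minimal sketch of that assertion, reusing the get_meminfo sketch above; "expected" is a stand-in name for the test's requested pool size (1024 pages in this run), not a variable from the script:

# Hugepage accounting invariant in the form the trace shows at
# hugepages.sh@107/@110. Assumes the get_meminfo sketch defined earlier.
expected=1024                           # requested pool size for this test
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo HugePages_Total)    # 1024 in this run

echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
# With surp=resv=0 this collapses to total == expected, which is what passes here.
(( total == expected + surp + resv )) || echo "hugepage accounting mismatch" >&2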
00:05:02.100 12:26:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:02.100 12:26:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:02.100 12:26:11 -- setup/common.sh@18 -- # local node=
00:05:02.100 12:26:11 -- setup/common.sh@19 -- # local var val
00:05:02.100 12:26:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.100 12:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.100 12:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.100 12:26:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.100 12:26:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.100 12:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.100 12:26:11 -- setup/common.sh@31 -- # IFS=': '
00:05:02.100 12:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7518744 kB' 'MemAvailable: 9482936 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 853756 kB' 'Inactive: 1442180 kB' 'Active(anon): 127368 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118472 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143108 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77532 kB' 'KernelStack: 6272 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:05:02.101 12:26:11 -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: each field from MemTotal through Unaccepted is tested against HugePages_Total and skipped with continue ...]
00:05:02.101 12:26:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:02.101 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.101 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.101 12:26:11 -- setup/common.sh@33 -- # echo 1024 00:05:02.101 12:26:11 -- setup/common.sh@33 -- # return 0 00:05:02.101 12:26:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.101 12:26:11 -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.101 12:26:11 -- setup/hugepages.sh@27 -- # local node 00:05:02.101 12:26:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.101 12:26:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.101 12:26:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.102 12:26:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.102 12:26:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.102 12:26:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.102 12:26:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.102 12:26:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.102 12:26:11 -- setup/common.sh@18 -- # local node=0 00:05:02.102 12:26:11 -- setup/common.sh@19 -- # local var val 00:05:02.102 12:26:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.102 12:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.102 12:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.102 12:26:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.102 12:26:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.102 12:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.102 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.102 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.102 12:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7519116 kB' 'MemUsed: 4722856 kB' 'SwapCached: 0 kB' 'Active: 853524 kB' 'Inactive: 1442180 kB' 'Active(anon): 127136 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2179060 kB' 'Mapped: 48704 kB' 'AnonPages: 118272 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65576 kB' 'Slab: 143108 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.102 12:26:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.102 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.102 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.102 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.102 12:26:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.102 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.102 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.102 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.102 12:26:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.102 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.102 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.102 12:26:11 -- 
setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: each node0 meminfo field from SwapCached through HugePages_Free is tested against HugePages_Surp and skipped with continue ...]
00:05:02.102 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.102 12:26:11 -- setup/common.sh@33 -- # echo 0
00:05:02.102 12:26:11 -- setup/common.sh@33 -- # return 0
00:05:02.102 12:26:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:02.102 12:26:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:02.102 12:26:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:02.102 12:26:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:02.102 node0=1024 expecting 1024
00:05:02.102 12:26:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:02.102 12:26:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:02.102 
00:05:02.102 real 0m1.524s
00:05:02.102 user 0m0.659s
00:05:02.102 sys 0m0.837s
00:05:02.103 12:26:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:02.103 12:26:11 -- common/autotest_common.sh@10 -- # set +x
00:05:02.103 ************************************
00:05:02.103 END TEST default_setup
00:05:02.103 ************************************
00:05:02.103 12:26:11 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:02.103 12:26:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:02.103 12:26:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:02.103 12:26:11 -- common/autotest_common.sh@10 -- # set +x
00:05:02.103 ************************************
00:05:02.103 START TEST per_node_1G_alloc
00:05:02.103 ************************************
00:05:02.103 12:26:11 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:02.103 12:26:11 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:02.103 12:26:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:02.103 12:26:11 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:02.103 12:26:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:02.103 12:26:11 -- setup/hugepages.sh@51 -- # shift
00:05:02.103 12:26:11 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:02.103 12:26:11 -- setup/hugepages.sh@52 -- # local node_ids
00:05:02.103 12:26:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:02.361 12:26:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:02.361 12:26:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:02.361 12:26:11 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:02.361 12:26:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:02.361 12:26:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:02.361 12:26:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:02.361 12:26:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:02.361 12:26:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:02.361 12:26:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:02.361 12:26:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:02.361 12:26:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:02.361 12:26:11 -- setup/hugepages.sh@73 -- # return 0
00:05:02.361 12:26:11 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:02.361 12:26:11 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:02.361 12:26:11 -- setup/hugepages.sh@146 -- # setup output
00:05:02.361 12:26:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:02.361 12:26:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:02.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:02.618 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:02.618 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:02.618 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:02.619 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
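scripts/setup.sh has just been driven with NRHUGE=512 and HUGENODE=0, placing the whole 2048 kB hugepage pool on NUMA node 0; the verification that follows reads it back through the per-node meminfo. The kernel's standard sysfs knobs make the same request directly; a minimal sketch for illustration rather than setup.sh's exact internals:

# Reserve 512 x 2048 kB hugepages on NUMA node 0 and read the pool back,
# which is what the verify_nr_hugepages pass below confirms via node0/meminfo.
# The sysfs paths are standard kernel ABI; run as root.
NRHUGE=512
HUGENODE=0
echo "$NRHUGE" > "/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages"
grep HugePages_Total "/sys/devices/system/node/node${HUGENODE}/meminfo"
# expected output: Node 0 HugePages_Total:   512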
12:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.881 12:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.881 12:26:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.881 12:26:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.881 12:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8569628 kB' 'MemAvailable: 10533840 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 854444 kB' 'Inactive: 1442200 kB' 'Active(anon): 128056 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119220 kB' 'Mapped: 48896 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143088 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77512 kB' 'KernelStack: 6312 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.881 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.881 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.882 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.882 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.882 12:26:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.882 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.882 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.882 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.882 12:26:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.882 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.882 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.882 12:26:11 -- 
00:05:02.882 12:26:11 -- [the same scan continues through WritebackTmp .. HardwareCorrupted, still with no match]
00:05:02.882 12:26:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.882 12:26:11 -- setup/common.sh@33 -- # echo 0
00:05:02.882 12:26:11 -- setup/common.sh@33 -- # return 0
00:05:02.882 12:26:11 -- setup/hugepages.sh@97 -- # anon=0
00:05:02.882 12:26:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:02.882 12:26:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.882 12:26:11 -- setup/common.sh@18 -- # local node=
00:05:02.882 12:26:11 -- setup/common.sh@19 -- # local var val
00:05:02.882 12:26:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.882 12:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.882 12:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.882 12:26:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.882 12:26:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.882 12:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.882 12:26:11 -- setup/common.sh@31 -- # IFS=': '
00:05:02.882 12:26:11 -- setup/common.sh@31 -- # read -r var val _
00:05:02.882 12:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8569376 kB' 'MemAvailable: 10533588 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 853840 kB' 'Inactive: 1442200 kB' 'Active(anon): 127452 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118600 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143108 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77532 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
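The repeated "[[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" entries above are bash xtrace from get_meminfo in setup/common.sh scanning one meminfo line per iteration until the requested key matches. Reconstructed from this trace alone -- a sketch, not the verbatim SPDK source, and real line numbers differ -- the function has roughly this shape:

    # get_meminfo as reconstructed from the xtrace above -- a sketch, not the verbatim source
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val
        local mem_f=/proc/meminfo mem
        # a node argument switches to the per-node sysfs copy (used further below)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines carry a "Node N " prefix; strip it
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then   # xtrace renders the pattern as \H\u\g\e\P\a\g\e\s...
                echo "$val"                 # the "echo 0" / "echo 512" entries in the trace
                return 0
            fi
            continue                        # one xtrace entry per skipped key
        done < <(printf '%s\n' "${mem[@]}")
        echo 0                              # assumed fallback for a key that never appears
    }

Each skipped key emits four trace entries (IFS, read, the [[ ]] test, continue), so a single lookup over the ~55 meminfo keys produces a couple of hundred log lines.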
00:05:02.882 12:26:11 -- [key scan for \H\u\g\e\P\a\g\e\s\_\S\u\r\p: MemTotal through HugePages_Rsvd checked and skipped]
00:05:02.883 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.883 12:26:11 -- setup/common.sh@33 -- # echo 0
00:05:02.883 12:26:11 -- setup/common.sh@33 -- # return 0
00:05:02.883 12:26:11 -- setup/hugepages.sh@99 -- # surp=0
00:05:02.883 12:26:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:02.883 12:26:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:02.883 12:26:11 -- [same get_meminfo preamble as above: node=, mem_f=/proc/meminfo, mapfile -t mem]
00:05:02.883 12:26:11 -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot; identical to the first except 'Active: 853812 kB' 'Active(anon): 127424 kB' 'AnonPages: 118568 kB' 'KernelStack: 6272 kB' 'PageTables: 4176 kB']
00:05:02.884 12:26:11 -- [key scan for \H\u\g\e\P\a\g\e\s\_\R\s\v\d: MemTotal through HugePages_Free checked and skipped]
00:05:02.885 12:26:11 -- setup/common.sh@31 -- # read -r var val _
00:05:02.885 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:02.885 12:26:11 -- setup/common.sh@33 -- # echo 0
00:05:02.885 12:26:11 -- setup/common.sh@33 -- # return 0
00:05:02.885 12:26:11 -- setup/hugepages.sh@100 -- # resv=0
00:05:02.885 nr_hugepages=512
00:05:02.885 12:26:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:02.885 resv_hugepages=0
00:05:02.885 12:26:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:02.885 surplus_hugepages=0
00:05:02.885 12:26:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:02.885 anon_hugepages=0
00:05:02.885 12:26:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:02.885 12:26:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:02.885 12:26:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:02.885 12:26:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:02.885 12:26:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:02.885 12:26:11 -- [same get_meminfo preamble as above: node=, mem_f=/proc/meminfo, mapfile -t mem]
00:05:02.885 12:26:11 -- setup/common.sh@16 -- # printf '%s\n' [third /proc/meminfo snapshot; identical to the first except 'Active: 853852 kB' 'Active(anon): 127464 kB']
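The four echoed values are the state the test expects after allocation: all 512 pages accounted as plain nr_hugepages, with nothing reserved, surplus, or anonymous (transparent). The arithmetic traced at setup/hugepages.sh@107-110 amounts to the following (a sketch under the same caveat as above, with get_meminfo as sketched earlier):

    # accounting check as traced at setup/hugepages.sh@107-110 -- a sketch, not the verbatim source
    nr_hugepages=512 anon=0 surp=0 resv=0
    (( 512 == nr_hugepages + surp + resv ))   # the request must be fully covered by plain pages
    if (( 512 == nr_hugepages )); then
        # re-read the kernel's own counter and compare it against the script's bookkeeping
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    fi

Both tests pass here: the snapshots above report HugePages_Total: 512 with HugePages_Rsvd and HugePages_Surp both 0.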
00:05:02.885 12:26:11 -- [key scan for \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l: MemTotal through Unaccepted checked and skipped]
00:05:02.886 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.886 12:26:11 -- setup/common.sh@33 -- # echo 512
00:05:02.886 12:26:11 -- setup/common.sh@33 -- # return 0
00:05:02.886 12:26:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:02.886 12:26:11 -- setup/hugepages.sh@112 -- # get_nodes
00:05:02.886 12:26:11 -- setup/hugepages.sh@27 -- # local node
00:05:02.886 12:26:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.886 12:26:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:02.886 12:26:11 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:02.886 12:26:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:02.886 12:26:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:02.886 12:26:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:02.886 12:26:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:02.886 12:26:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
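From here the same check repeats per NUMA node: get_nodes enumerates /sys/devices/system/node/node*, and the trailing 0 in "get_meminfo HugePages_Surp 0" selects that node's sysfs meminfo instead of the global /proc/meminfo. A sketch of the discovery loop following the traced names (the array type and the source of the 512 are assumptions; the trace only shows the assignment):

    # get_nodes, following the xtrace at setup/hugepages.sh@27-33 -- a sketch, not the verbatim source
    shopt -s extglob nullglob
    declare -A nodes_sys              # assumed associative, keyed by node id
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # "${node##*node}" strips everything up to the last "node", leaving the id (0 here);
            # 512 is the traced value -- presumably read from the node's hugepages counters
            nodes_sys[${node##*node}]=512
        done
        no_nodes=${#nodes_sys[@]}     # 1 on this single-node VM
        (( no_nodes > 0 ))            # at least one node must be present
    }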
00:05:02.886 12:26:11 -- setup/common.sh@18 -- # local node=0
00:05:02.886 12:26:11 -- setup/common.sh@19 -- # local var val
00:05:02.886 12:26:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.886 12:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.886 12:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:02.886 12:26:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:02.886 12:26:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.886 12:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.886 12:26:11 -- setup/common.sh@31 -- # IFS=': '
00:05:02.886 12:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8569376 kB' 'MemUsed: 3672596 kB' 'SwapCached: 0 kB' 'Active: 853852 kB' 'Inactive: 1442200 kB' 'Active(anon): 127464 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2179060 kB' 'Mapped: 48704 kB' 'AnonPages: 118604 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65576 kB' 'Slab: 143104 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:02.886 12:26:11 -- setup/common.sh@31 -- # read -r var val _
00:05:02.887 12:26:11 -- [key scan for \H\u\g\e\P\a\g\e\s\_\S\u\r\p over the node0 statistics, MemTotal through FilePmdMapped so far; note the node0 list reports MemUsed and FilePages where /proc/meminfo has MemAvailable, Buffers and Cached; the scan continues]
00:05:02.887 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.887 12:26:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.887 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.887 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.887 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.887 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.887 12:26:11 -- setup/common.sh@32 -- # continue 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.887 12:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.887 12:26:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.887 12:26:11 -- setup/common.sh@33 -- # echo 0 00:05:02.887 12:26:11 -- setup/common.sh@33 -- # return 0 00:05:02.887 12:26:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.887 12:26:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.887 12:26:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.887 12:26:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.887 node0=512 expecting 512 00:05:02.887 12:26:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:02.887 12:26:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:02.887 00:05:02.887 real 0m0.675s 00:05:02.887 user 0m0.329s 00:05:02.887 sys 0m0.396s 00:05:02.887 12:26:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.887 ************************************ 00:05:02.887 12:26:11 -- common/autotest_common.sh@10 -- # set +x 00:05:02.887 END TEST per_node_1G_alloc 00:05:02.887 ************************************ 00:05:02.887 12:26:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:02.887 12:26:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.887 12:26:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.887 12:26:11 -- common/autotest_common.sh@10 -- # set +x 00:05:02.887 ************************************ 00:05:02.887 START TEST even_2G_alloc 00:05:02.887 ************************************ 00:05:02.887 12:26:11 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:02.887 12:26:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:02.887 12:26:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:02.887 12:26:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.887 12:26:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.887 12:26:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:02.887 12:26:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.887 12:26:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.887 12:26:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.887 12:26:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.887 12:26:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.887 12:26:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.887 12:26:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 
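The scan condensed above is the generic meminfo helper this suite keeps re-entering. A minimal sketch of it, reconstructed from the xtrace alone; names and steps mirror the traced setup/common.sh lines, but the argument handling and return path are assumptions, not the verbatim SPDK source:

    # get_meminfo <Field> [node] - print the value of one meminfo field.
    # Sketch of the field scan traced at setup/common.sh@16-33 above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it
        # (the +([0-9]) pattern needs extglob).
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        local IFS=': '
        # Walk field by field; print the value of the first matching key.
        while read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the node0 snapshot above, get_meminfo HugePages_Surp 0 would print 0, which is exactly the "echo 0" / "return 0" pair in the trace; get_meminfo MemTotal would print 12241972.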
00:05:02.887 12:26:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:02.887 12:26:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:02.887 12:26:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:02.887 12:26:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:02.888 12:26:11 -- setup/hugepages.sh@83 -- # : 0
00:05:02.888 12:26:11 -- setup/hugepages.sh@84 -- # : 0
00:05:02.888 12:26:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:02.888 12:26:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:02.888 12:26:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:02.888 12:26:11 -- setup/hugepages.sh@153 -- # setup output
00:05:02.888 12:26:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:02.888 12:26:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:03.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:03.469 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.469 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.469 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.469 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.469 12:26:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:03.469 12:26:12 -- setup/hugepages.sh@89 -- # local node
00:05:03.469 12:26:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.469 12:26:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.469 12:26:12 -- setup/hugepages.sh@92 -- # local surp
00:05:03.469 12:26:12 -- setup/hugepages.sh@93 -- # local resv
00:05:03.469 12:26:12 -- setup/hugepages.sh@94 -- # local anon
00:05:03.469 12:26:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.469 12:26:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.469 12:26:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.469 12:26:12 -- setup/common.sh@18 -- # local node=
00:05:03.469 12:26:12 -- setup/common.sh@19 -- # local var val
00:05:03.469 12:26:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.469 12:26:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.469 12:26:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.469 12:26:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.469 12:26:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.469 12:26:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.469 12:26:12 -- setup/common.sh@31 -- # IFS=': '
00:05:03.469 12:26:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7523372 kB' 'MemAvailable: 9487584 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 854028 kB' 'Inactive: 1442200 kB' 'Active(anon): 127640 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119012 kB' 'Mapped: 48768 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143112 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6348 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:05:03.469 12:26:12 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeat the read/compare/continue walk for every /proc/meminfo field above against AnonHugePages; MemTotal through HardwareCorrupted all fall through]
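Before this scan concludes below, the distribution step traced at the top of the test (hugepages.sh@62-84, condensed above) is worth spelling out. A hedged reconstruction of get_test_nr_hugepages_per_node as the xtrace implies it; the two branches traced as "(( 0 > 0 ))" and the exact bookkeeping behind the two ": 0" steps at @83-84 are assumptions:

    # Sketch: spread the hugepage budget across NUMA nodes. With no
    # user-requested nodes and a single node, the whole 1024-page budget
    # lands on node 0, matching "nodes_test[_no_nodes - 1]=1024" above.
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")                    # empty in this run
        local _nr_hugepages=${nr_hugepages:-1024}  # set by get_test_nr_hugepages
        local _no_nodes=1                          # NUMA nodes on this VM
        local -g nodes_test=()
        local node
        if ((${#user_nodes[@]} > 0)); then         # traced as (( 0 > 0 )): skipped
            for node in "${user_nodes[@]}"; do
                nodes_test[node]=$_nr_hugepages
            done
        elif ((${#nodes_test[@]} > 0)); then       # traced as (( 0 > 0 )): skipped
            :
        else
            # Default path actually taken: walk node indices down from
            # _no_nodes-1 to 0, assigning the budget to each.
            while ((_no_nodes > 0)); do
                nodes_test[_no_nodes - 1]=$_nr_hugepages
                : $((_no_nodes -= 1))   # the trace shows two ':' arithmetic
                                        # steps here; their exact form is assumed
            done
        fi
    }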
00:05:03.470 12:26:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.470 12:26:12 -- setup/common.sh@33 -- # echo 0
00:05:03.470 12:26:12 -- setup/common.sh@33 -- # return 0
00:05:03.470 12:26:12 -- setup/hugepages.sh@97 -- # anon=0
00:05:03.470 12:26:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.470 12:26:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.470 12:26:12 -- setup/common.sh@18 -- # local node=
00:05:03.470 12:26:12 -- setup/common.sh@19 -- # local var val
00:05:03.470 12:26:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.470 12:26:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.470 12:26:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.470 12:26:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.470 12:26:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.470 12:26:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.471 12:26:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7524004 kB' 'MemAvailable: 9488216 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 853520 kB' 'Inactive: 1442200 kB' 'Active(anon): 127132 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118544 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143112 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6240 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:05:03.471 12:26:12 -- setup/common.sh@31 -- # IFS=': '
00:05:03.471 12:26:12 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same read/compare/continue walk repeats for every field above against HugePages_Surp; MemTotal through HugePages_Rsvd all fall through]
00:05:03.472 12:26:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.472 12:26:12 -- setup/common.sh@33 -- # echo 0
00:05:03.472 12:26:12 -- setup/common.sh@33 -- # return 0
00:05:03.472 12:26:12 -- setup/hugepages.sh@99 -- # surp=0
00:05:03.472 12:26:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.472 12:26:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.472 12:26:12 -- setup/common.sh@18 -- # local node=
00:05:03.472 12:26:12 -- setup/common.sh@19 -- # local var val
00:05:03.472 12:26:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.472 12:26:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.472 12:26:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.472 12:26:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.472 12:26:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.472 12:26:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.472 12:26:12 -- setup/common.sh@31 -- # IFS=': '
00:05:03.472 12:26:12 -- setup/common.sh@31 -- # read -r var val _
00:05:03.472 12:26:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7524004 kB' 'MemAvailable: 9488216 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 853680 kB' 'Inactive: 1442200 kB' 'Active(anon): 127292 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118688 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143112 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6272 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
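These snapshots are internally consistent in a way that is easy to verify by hand: at the 2048 kB page size reported by Hugepagesize, the 2097152 (kB) passed to get_test_nr_hugepages is exactly 1024 pages, matching both the traced nr_hugepages=1024 and HugePages_Total: 1024, and the kernel's Hugetlb accounting of 2097152 kB is those 1024 pages multiplied back out. A worked check, as an illustration only (variable names here are not from the suite):

    # Worked arithmetic against the snapshot values above.
    size_kb=2097152         # argument to get_test_nr_hugepages
    hugepagesize_kb=2048    # 'Hugepagesize: 2048 kB'
    nr=$((size_kb / hugepagesize_kb))
    echo "$nr"                          # 1024, the traced nr_hugepages
    echo "$((nr * hugepagesize_kb))"    # 2097152, the 'Hugetlb: 2097152 kB' line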
[xtrace condensed: setup/common.sh@31-32 walk every field of the snapshot above against HugePages_Rsvd; MemTotal through HugePages_Free all continue]
00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.733 12:26:12 -- setup/common.sh@33 -- # echo 0
00:05:03.733 12:26:12 -- setup/common.sh@33 -- # return 0
00:05:03.733 12:26:12 -- setup/hugepages.sh@100 -- # resv=0
00:05:03.733 nr_hugepages=1024
00:05:03.733 12:26:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:03.733 resv_hugepages=0
00:05:03.733 12:26:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.733 surplus_hugepages=0
00:05:03.733 12:26:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.733 anon_hugepages=0
00:05:03.733 12:26:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.733 12:26:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.733 12:26:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:03.733 12:26:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:03.733 12:26:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:03.733 12:26:12 -- setup/common.sh@18 -- # local node=
00:05:03.733 12:26:12 -- setup/common.sh@19 -- # local var val
00:05:03.733 12:26:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.733 12:26:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
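Stripped of the scanner noise, the verification pass traced between hugepages.sh@96 and @110 reduces to a handful of reads and two assertions. A hedged sketch, assuming the get_meminfo helper sketched earlier is in scope; the sysfs path behind the THP gate and the expression that produced the substituted literal 1024 at @107/@109 are assumptions (the xtrace only shows already-expanded values, and my sketch fetches the total up front purely for clarity):

    # Sketch of the verify_nr_hugepages checks visible in this trace.
    nr_hugepages=1024   # set earlier by get_test_nr_hugepages
    # hugepages.sh@96: skip AnonHugePages only if THP is hard-disabled.
    # The traced string "always [madvise] never" does not contain "[never]".
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # path assumed
    anon=0
    [[ $thp != *"[never]"* ]] && anon=$(get_meminfo AnonHugePages)   # -> 0
    surp=$(get_meminfo HugePages_Surp)    # -> 0
    resv=$(get_meminfo HugePages_Rsvd)    # -> 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    total=$(get_meminfo HugePages_Total)  # the scan that begins at @110 below
    ((total == nr_hugepages + surp + resv))   # traced at @107 with 1024 substituted
    ((total == nr_hugepages))                 # traced at @109 with 1024 substituted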
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.733 12:26:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.733 12:26:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.733 12:26:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.733 12:26:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7524004 kB' 'MemAvailable: 9488216 kB' 'Buffers: 2436 kB' 'Cached: 2176624 kB' 'SwapCached: 0 kB' 'Active: 853524 kB' 'Inactive: 1442200 kB' 'Active(anon): 127136 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118572 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143112 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6272 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # continue 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # continue 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # continue 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # continue 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # continue 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # continue 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.733 12:26:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.733 12:26:12 -- setup/common.sh@32 -- # 
00:05:03.733 12:26:12 -- setup/common.sh@31-32 -- # read/continue scan over non-matching keys: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted
00:05:03.734 12:26:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.734 12:26:12 -- setup/common.sh@33 -- # echo 1024
00:05:03.734 12:26:12 -- setup/common.sh@33 -- # return 0
00:05:03.734 12:26:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.734 12:26:12 -- setup/hugepages.sh@112 -- # get_nodes
00:05:03.734 12:26:12 -- setup/hugepages.sh@27 -- # local node
00:05:03.734 12:26:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.734 12:26:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:03.734 12:26:12 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:03.734 12:26:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:03.734 12:26:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:03.734 12:26:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
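The HugePages_Surp lookup that follows is the first one to pass a node argument, so get_meminfo switches from /proc/meminfo to the per-node sysfs file and strips the 'Node N ' prefix, exactly as the mapfile/mem=(...) records show. A sketch of that helper reconstructed from the trace (an approximation of SPDK's test/setup/common.sh, not a verbatim copy); note how an empty node argument yields the bogus path .../node/node/meminfo, which fails the -e probe and falls back to /proc/meminfo, matching the earlier records:

    shopt -s extglob                     # needed for the +([0-9]) pattern
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # prefer the per-node file when a node was given and it exists
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the 'Node N ' prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }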
00:05:03.734 12:26:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:03.734 12:26:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.734 12:26:12 -- setup/common.sh@18 -- # local node=0
00:05:03.734 12:26:12 -- setup/common.sh@19 -- # local var val
00:05:03.734 12:26:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.734 12:26:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.735 12:26:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:03.735 12:26:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:03.735 12:26:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.735 12:26:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.735 12:26:12 -- setup/common.sh@31 -- # IFS=': '
00:05:03.735 12:26:12 -- setup/common.sh@31 -- # read -r var val _
00:05:03.735 12:26:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7524004 kB' 'MemUsed: 4717968 kB' 'SwapCached: 0 kB' 'Active: 853520 kB' 'Inactive: 1442200 kB' 'Active(anon): 127132 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2179060 kB' 'Mapped: 48704 kB' 'AnonPages: 118280 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65576 kB' 'Slab: 143108 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:03.735 12:26:12 -- setup/common.sh@31-32 -- # read/continue scan over non-matching keys: MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total HugePages_Free
00:05:03.735 12:26:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.735 12:26:12 -- setup/common.sh@33 -- # echo 0
00:05:03.735 12:26:12 -- setup/common.sh@33 -- # return 0
00:05:03.735 12:26:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
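A paraphrase of the hugepages.sh@115-117 bookkeeping just traced: each node's expected page count absorbs the globally reserved pages and then that node's own surplus (both 0 here). This is a sketch of the loop's effect, not the script's literal statements, and it reuses the get_meminfo sketch shown earlier:

    nodes_test=([0]=1024)   # expected pages per node, from get_nodes
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                # spread reserved pages
        surp=$(get_meminfo HugePages_Surp "$node")    # per-node surplus
        (( nodes_test[node] += surp ))
    done
    echo "node0=${nodes_test[0]} expecting 1024"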
00:05:03.735 12:26:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:03.736 12:26:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:03.736 12:26:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:03.736 node0=1024 expecting 1024
00:05:03.736 12:26:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:03.736 12:26:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:03.736
00:05:03.736 real 0m0.713s
00:05:03.736 user 0m0.302s
00:05:03.736 sys 0m0.427s
00:05:03.736 12:26:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:03.736 12:26:12 -- common/autotest_common.sh@10 -- # set +x
00:05:03.736 ************************************
00:05:03.736 END TEST even_2G_alloc
00:05:03.736 ************************************
00:05:03.736 12:26:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:03.736 12:26:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:03.736 12:26:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:03.736 12:26:12 -- common/autotest_common.sh@10 -- # set +x
00:05:03.736 ************************************
00:05:03.736 START TEST odd_alloc
00:05:03.736 ************************************
00:05:03.736 12:26:12 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:03.736 12:26:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:03.736 12:26:12 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:03.736 12:26:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:03.736 12:26:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:03.736 12:26:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:03.736 12:26:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:03.736 12:26:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:03.736 12:26:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:03.736 12:26:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:03.736 12:26:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:03.736 12:26:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:03.736 12:26:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:03.736 12:26:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:03.736 12:26:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:03.736 12:26:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:03.736 12:26:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:03.736 12:26:12 -- setup/hugepages.sh@83 -- # : 0
00:05:03.736 12:26:12 -- setup/hugepages.sh@84 -- # : 0
00:05:03.736 12:26:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:03.736 12:26:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:03.736 12:26:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:03.736 12:26:12 -- setup/hugepages.sh@160 -- # setup output
00:05:03.736 12:26:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.736 12:26:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:04.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:04.306 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.306 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.306 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.306 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
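odd_alloc requests HUGEMEM=2049, i.e. 2049 MB of hugepage memory, which get_test_nr_hugepages converts into the odd page count 1025 seen above. The conversion is plain integer arithmetic; the round-up step is inferred from the 2098176 -> 1025 pair in the trace rather than copied from the script:

    hugepagesize=2048                    # kB, from 'Hugepagesize: 2048 kB'
    size=$(( 2049 * 1024 ))              # HUGEMEM=2049 MB -> 2098176 kB
    nr_hugepages=$(( (size + hugepagesize - 1) / hugepagesize ))
    echo "$size kB -> $nr_hugepages pages"   # 2098176 kB -> 1025 pages

2098176 / 2048 is 1024.5, so rounding up yields the deliberately odd 1025.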
00:05:04.306 12:26:13 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:04.306 12:26:13 -- setup/hugepages.sh@89 -- # local node
00:05:04.306 12:26:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:04.306 12:26:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:04.306 12:26:13 -- setup/hugepages.sh@92 -- # local surp
00:05:04.306 12:26:13 -- setup/hugepages.sh@93 -- # local resv
00:05:04.306 12:26:13 -- setup/hugepages.sh@94 -- # local anon
00:05:04.306 12:26:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:04.306 12:26:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:04.306 12:26:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:04.306 12:26:13 -- setup/common.sh@18 -- # local node=
00:05:04.306 12:26:13 -- setup/common.sh@19 -- # local var val
00:05:04.306 12:26:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.306 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.306 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.306 12:26:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.306 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.306 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.306 12:26:13 -- setup/common.sh@31 -- # IFS=': '
00:05:04.306 12:26:13 -- setup/common.sh@31 -- # read -r var val _
00:05:04.306 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7525972 kB' 'MemAvailable: 9490184 kB' 'Buffers: 2436 kB' 'Cached: 2176628 kB' 'SwapCached: 0 kB' 'Active: 853752 kB' 'Inactive: 1442200 kB' 'Active(anon): 127364 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118764 kB' 'Mapped: 48892 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143116 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77540 kB' 'KernelStack: 6304 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:05:04.307 12:26:13 -- setup/common.sh@31-32 -- # read/continue scan over non-matching keys: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted
00:05:04.307 12:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:04.307 12:26:13 -- setup/common.sh@33 -- # echo 0
00:05:04.307 12:26:13 -- setup/common.sh@33 -- # return 0
00:05:04.307 12:26:13 -- setup/hugepages.sh@97 -- # anon=0
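The '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' record at hugepages.sh@96 is the script testing the kernel's transparent-hugepage switch: the bracketed word in that sysfs file is the active mode, so AnonHugePages is only worth counting when the mode is not [never]. A sketch of that probe (the sysfs path is the standard location; the surrounding logic is reconstructed from the trace, not copied):

    thp_enabled=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ $(< "$thp_enabled") != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # helper sketched earlier
    else
        anon=0                              # THP disabled: nothing to count
    fi
    echo "anon=$anon"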
00:05:04.307 12:26:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:04.307 12:26:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.307 12:26:13 -- setup/common.sh@18 -- # local node=
00:05:04.307 12:26:13 -- setup/common.sh@19 -- # local var val
00:05:04.307 12:26:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.307 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.307 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.307 12:26:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.307 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.307 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.307 12:26:13 -- setup/common.sh@31 -- # IFS=': '
00:05:04.307 12:26:13 -- setup/common.sh@31 -- # read -r var val _
00:05:04.307 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7526248 kB' 'MemAvailable: 9490460 kB' 'Buffers: 2436 kB' 'Cached: 2176628 kB' 'SwapCached: 0 kB' 'Active: 853708 kB' 'Inactive: 1442200 kB' 'Active(anon): 127320 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118512 kB' 'Mapped: 48832 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143100 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6272 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
00:05:04.308 12:26:13 -- setup/common.sh@31-32 -- # read/continue scan over non-matching keys: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free HugePages_Rsvd
00:05:04.308 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.308 12:26:13 -- setup/common.sh@33 -- # echo 0
00:05:04.308 12:26:13 -- setup/common.sh@33 -- # return 0
00:05:04.308 12:26:13 -- setup/hugepages.sh@99 -- # surp=0
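A side note on why every comparison in this log prints as '\H\u\g\e\P\a\g\e\s\_\S\u\r\p': the right-hand side of == inside [[ ]] is a glob pattern, and bash's xtrace escapes each character so the logged form reads back as a literal match. This is reproducible in any bash (illustrative session, default '+ ' PS4 rather than this log's timestamped one):

    $ bash -xc 'var=HugePages_Surp; [[ $var == HugePages_Surp ]]'
    + var=HugePages_Surp
    + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]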
'Active(anon): 127364 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118504 kB' 'Mapped: 48832 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143100 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6272 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # 
continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.309 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.309 12:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.310 12:26:13 -- setup/common.sh@33 -- # echo 0 00:05:04.310 12:26:13 -- setup/common.sh@33 -- # return 0 00:05:04.310 12:26:13 -- setup/hugepages.sh@100 -- # resv=0 00:05:04.310 12:26:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:04.310 nr_hugepages=1025 00:05:04.310 resv_hugepages=0 00:05:04.310 12:26:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.310 surplus_hugepages=0 00:05:04.310 12:26:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.310 anon_hugepages=0 00:05:04.310 12:26:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.310 12:26:13 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.310 12:26:13 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:04.310 12:26:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.310 12:26:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.310 12:26:13 -- setup/common.sh@18 -- # local node= 00:05:04.310 12:26:13 -- setup/common.sh@19 -- # local var val 00:05:04.310 12:26:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.310 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.310 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.310 12:26:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.310 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.310 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.310 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7526248 kB' 'MemAvailable: 9490460 kB' 'Buffers: 2436 kB' 'Cached: 2176628 kB' 'SwapCached: 0 kB' 'Active: 853744 kB' 'Inactive: 1442200 kB' 'Active(anon): 127356 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118756 kB' 'Mapped: 48832 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143096 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77520 kB' 'KernelStack: 6256 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 346032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 
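Note: the xtrace above is setup/common.sh's get_meminfo scanning a snapshot of /proc/meminfo field by field: mapfile pulls the file into an array, then IFS=': ' read splits each 'Key: value' line until the requested key matches and its value is echoed. A minimal standalone sketch of the same parsing idea (the helper name is ours, not the repo's):

    # sketch, assuming the plain /proc/meminfo layout; mirrors the
    # IFS=': ' read -r var val _ loop in the trace above
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Surp:     0" -> var=HugePages_Surp, val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

Used the way the trace uses it: surp=$(get_meminfo_value HugePages_Surp)  # -> 0 on this run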
00:05:04.310 12:26:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:04.310 12:26:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:04.310 12:26:13 -- setup/common.sh@18 -- # local node=
00:05:04.310 12:26:13 -- setup/common.sh@19 -- # local var val
00:05:04.310 12:26:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.310 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.310 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.310 12:26:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.310 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.310 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.310 12:26:13 -- setup/common.sh@31 -- # IFS=': '
00:05:04.310 12:26:13 -- setup/common.sh@31 -- # read -r var val _
00:05:04.310 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' [second full /proc/meminfo snapshot, identical to the one above except 'Active: 853744 kB' 'Active(anon): 127356 kB' 'AnonPages: 118756 kB' 'Slab: 143096 kB' 'SUnreclaim: 77520 kB' 'KernelStack: 6256 kB' 'PageTables: 4124 kB']
[... compare/continue scan over the snapshot keys (MemTotal through Unaccepted) elided ...]
00:05:04.311 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:04.311 12:26:13 -- setup/common.sh@33 -- # echo 1025
00:05:04.311 12:26:13 -- setup/common.sh@33 -- # return 0
00:05:04.311 12:26:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
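That last arithmetic line is the core assertion of verify_nr_hugepages: the kernel's HugePages_Total must equal the page count the test requested plus surplus plus reserved pages. A hedged sketch of the same identity, reusing the get_meminfo_value helper sketched above:

    # sketch; nr_hugepages=1025 is what the odd_alloc test configured
    nr_hugepages=1025
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2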
00:05:04.311 12:26:13 -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.311 12:26:13 -- setup/hugepages.sh@27 -- # local node
00:05:04.311 12:26:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.311 12:26:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:04.311 12:26:13 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:04.311 12:26:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.311 12:26:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.311 12:26:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:04.311 12:26:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:04.311 12:26:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.311 12:26:13 -- setup/common.sh@18 -- # local node=0
00:05:04.311 12:26:13 -- setup/common.sh@19 -- # local var val
00:05:04.311 12:26:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.311 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.311 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:04.311 12:26:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:04.311 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.311 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.311 12:26:13 -- setup/common.sh@31 -- # IFS=': '
00:05:04.311 12:26:13 -- setup/common.sh@31 -- # read -r var val _
00:05:04.312 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7527216 kB' 'MemUsed: 4714756 kB' 'SwapCached: 0 kB' 'Active: 853564 kB' 'Inactive: 1442204 kB' 'Active(anon): 127176 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2179064 kB' 'Mapped: 48704 kB' 'AnonPages: 118300 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65576 kB' 'Slab: 143088 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[... compare/continue scan over the node0 snapshot keys (MemTotal through HugePages_Free) elided ...]
00:05:04.312 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.312 12:26:13 -- setup/common.sh@33 -- # echo 0
00:05:04.312 12:26:13 -- setup/common.sh@33 -- # return 0
00:05:04.312 12:26:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.312 12:26:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.312 12:26:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.312 12:26:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.312 12:26:13 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:04.312 node0=1025 expecting 1025
00:05:04.312 12:26:13 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:04.312
00:05:04.312 real    0m0.663s
00:05:04.312 user    0m0.306s
00:05:04.312 sys     0m0.402s
00:05:04.312 12:26:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.312 12:26:13 -- common/autotest_common.sh@10 -- # set +x
00:05:04.313 ************************************
00:05:04.313 END TEST odd_alloc
00:05:04.313 ************************************
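The block above closes odd_alloc: once the global totals check out, each NUMA node's count is read from /sys/devices/system/node/node<N>/meminfo (note common.sh@24 swapping mem_f over to the node0 path) and compared against the expected odd value 1025. A sketch of that per-node read; the helper name is ours, and the node files prefix every line with "Node <N>", which the trace strips via the ${mem[@]#Node +([0-9]) } expansion:

    # sketch: read one node's HugePages_Total from sysfs
    node_hugepages() {
        local node=$1 var val _
        # lines look like "Node 0 HugePages_Total:  1025"; the two
        # leading fields are the "Node <n>" prefix
        while read -r _ _ var val; do
            [[ $var == HugePages_Total: ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    [[ $(node_hugepages 0) == 1025 ]] && echo 'node0=1025 expecting 1025'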
00:05:04.313 12:26:13 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:04.313 12:26:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:04.313 12:26:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:04.313 12:26:13 -- common/autotest_common.sh@10 -- # set +x
00:05:04.313 ************************************
00:05:04.313 START TEST custom_alloc
00:05:04.313 ************************************
00:05:04.313 12:26:13 -- common/autotest_common.sh@1104 -- # custom_alloc
00:05:04.313 12:26:13 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:04.313 12:26:13 -- setup/hugepages.sh@169 -- # local node
00:05:04.313 12:26:13 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:04.313 12:26:13 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:04.313 12:26:13 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:04.313 12:26:13 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:04.313 12:26:13 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:04.313 12:26:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:04.313 12:26:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:04.313 12:26:13 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:04.313 12:26:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:04.313 12:26:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:04.313 12:26:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:04.313 12:26:13 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:04.313 12:26:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:04.313 12:26:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:04.313 12:26:13 -- setup/hugepages.sh@83 -- # : 0
00:05:04.313 12:26:13 -- setup/hugepages.sh@84 -- # : 0
00:05:04.313 12:26:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:04.313 12:26:13 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:04.313 12:26:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:04.313 12:26:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[... second get_test_nr_hugepages_per_node pass (same @62 through @69 locals as above) elided ...]
00:05:04.313 12:26:13 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:04.313 12:26:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:04.313 12:26:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:04.313 12:26:13 -- setup/hugepages.sh@78 -- # return 0
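get_test_nr_hugepages just turned the requested pool size into a page count (nr_hugepages=512 above). The trace only shows the input (1048576, in kB, i.e. 1 GiB) and the result, but the arithmetic is presumably the pool size divided by the default hugepage size, which this VM's /proc/meminfo reports as Hugepagesize: 2048 kB:

    # inferred sketch, not hugepages.sh's literal code
    size_kb=1048576
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here
    echo $(( size_kb / hugepagesize_kb ))  # 1048576 / 2048 -> 512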
00:05:04.313 12:26:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:04.313 12:26:13 -- setup/hugepages.sh@187 -- # setup output
00:05:04.313 12:26:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.313 12:26:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:04.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:04.881 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.881 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.881 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.881 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
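The device scan above is scripts/setup.sh acting on the HUGENODE list custom_alloc just built, so only 512 hugepages (1048576 kB of Hugetlb) end up reserved on node 0. A usage sketch of the equivalent manual invocation, with the path taken from this job's workspace (setup.sh normally needs root):

    sudo HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh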
00:05:04.881 12:26:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:04.881 12:26:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:04.881 12:26:13 -- setup/hugepages.sh@89 -- # local node
00:05:04.881 12:26:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:04.881 12:26:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:04.881 12:26:13 -- setup/hugepages.sh@92 -- # local surp
00:05:04.881 12:26:13 -- setup/hugepages.sh@93 -- # local resv
00:05:04.881 12:26:13 -- setup/hugepages.sh@94 -- # local anon
00:05:04.881 12:26:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:04.881 12:26:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:04.881 12:26:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:04.881 12:26:13 -- setup/common.sh@18 -- # local node=
00:05:04.881 12:26:13 -- setup/common.sh@19 -- # local var val
00:05:04.881 12:26:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:04.881 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.881 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.881 12:26:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.881 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.881 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.881 12:26:13 -- setup/common.sh@31 -- # IFS=': '
00:05:04.881 12:26:13 -- setup/common.sh@31 -- # read -r var val _
00:05:04.881 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8580396 kB' 'MemAvailable: 10544616 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 854740 kB' 'Inactive: 1442208 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 119456 kB' 'Mapped: 48816 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143100 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6336 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[... compare/continue scan over the snapshot keys (MemTotal through HardwareCorrupted) elided ...]
00:05:04.882 12:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:04.882 12:26:13 -- setup/common.sh@33 -- # echo 0
00:05:04.882 12:26:13 -- setup/common.sh@33 -- # return 0
00:05:04.882 12:26:13 -- setup/hugepages.sh@97 -- # anon=0
00:05:04.882 12:26:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... same @17 through @31 locals and mapfile setup as above, with get=HugePages_Surp, elided ...]
00:05:04.883 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' [third snapshot, identical to the previous one except 'Active: 853912 kB' 'Active(anon): 127524 kB' 'AnonPages: 118588 kB' 'Mapped: 48704 kB' 'Slab: 143104 kB' 'SUnreclaim: 77528 kB' 'KernelStack: 6256 kB' 'PageTables: 4112 kB' 'VmallocUsed: 54628 kB']
[... the HugePages_Surp compare/continue scan resumes; the captured log breaks off after the first few keys (MemTotal through Cached) ...]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 
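The loop traced at setup/common.sh@31..@33 is the body of the get_meminfo helper: snapshot the relevant meminfo file into an array, scan it key by key, and echo the value of the first key that matches. A sketch reconstructed from the @17..@33 markers in this trace (the actual script may differ in detail; the @25 error path below is assumed, since it is never exercised here):

    shopt -s extglob                 # the +([0-9]) patterns below need extglob

    get_meminfo() {
        local get=$1                 # key to look up, e.g. HugePages_Surp
        local node=$2                # optional NUMA node; empty = system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node source
        elif [[ -n $node ]]; then
            return 1                 # node requested but absent (assumed @25 guard)
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node <n> " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"              # numeric value; the kB unit lands in $_
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Each pass above that ends in 'echo 0' / 'return 0' is one such lookup: hugepages.sh@97 stored the AnonHugePages result as anon=0, and the pass now in progress does the same for HugePages_Surp.
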
00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.883 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.883 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.884 12:26:13 -- setup/common.sh@33 -- # echo 0 00:05:04.884 12:26:13 -- setup/common.sh@33 -- # return 0 00:05:04.884 12:26:13 -- setup/hugepages.sh@99 -- # surp=0 00:05:04.884 12:26:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.884 12:26:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.884 12:26:13 -- setup/common.sh@18 -- # local node= 00:05:04.884 12:26:13 -- setup/common.sh@19 -- # local var val 00:05:04.884 12:26:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.884 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.884 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.884 12:26:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.884 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.884 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.884 12:26:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8580144 kB' 'MemAvailable: 10544364 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 853924 kB' 'Inactive: 1442208 kB' 'Active(anon): 127536 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118664 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143100 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6288 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.884 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.884 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 
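The meminfo snapshots dumped by the @16 printf calls are internally consistent on the hugepage side: HugePages_Total is 512, Hugepagesize is 2048 kB, and Hugetlb reports exactly their product, the full backing reservation:

    echo $((512 * 2048)) kB   # 1048576 kB = 1 GiB, matching 'Hugetlb: 1048576 kB'
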
12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.145 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.145 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.146 12:26:13 -- setup/common.sh@33 -- # echo 0 00:05:05.146 12:26:13 -- setup/common.sh@33 -- # return 0 00:05:05.146 12:26:13 -- setup/hugepages.sh@100 -- # resv=0 00:05:05.146 nr_hugepages=512 00:05:05.146 12:26:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:05.146 resv_hugepages=0 00:05:05.146 12:26:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.146 surplus_hugepages=0 00:05:05.146 12:26:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.146 anon_hugepages=0 00:05:05.146 12:26:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.146 12:26:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.146 12:26:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:05.146 12:26:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.146 12:26:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.146 12:26:13 -- setup/common.sh@18 -- # local node= 00:05:05.146 12:26:13 -- setup/common.sh@19 -- # local var val 00:05:05.146 12:26:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.146 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.146 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.146 12:26:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.146 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.146 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8580144 kB' 'MemAvailable: 10544364 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 853892 kB' 'Inactive: 1442208 kB' 'Active(anon): 127504 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118660 kB' 'Mapped: 48704 kB' 
'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143100 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6288 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 346032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 
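Zooming out from the per-key scans: setup/hugepages.sh@97..@110 collects the anonymous, surplus, and reserved counts and checks them against the 512 pages the test configured. A sketch inferred from the traced line numbers and the values echoed above (variable names follow the nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 output; the exact source wording is assumed):

    nr_hugepages=512                       # configured earlier by the test
    anon=$(get_meminfo AnonHugePages)      # @97  -> 0
    surp=$(get_meminfo HugePages_Surp)     # @99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)     # @100 -> 0
    echo "nr_hugepages=$nr_hugepages"      # @102
    echo "resv_hugepages=$resv"            # @103
    echo "surplus_hugepages=$surp"         # @104
    echo "anon_hugepages=$anon"            # @105
    (( 512 == nr_hugepages + surp + resv ))                            # @107
    (( 512 == nr_hugepages ))                                          # @109
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) # @110

The HugePages_Total lookup feeding the @110 check is the scan currently in progress here.
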
-- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- 
setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.146 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.146 12:26:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- 
setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.147 12:26:13 -- setup/common.sh@33 -- # echo 512 00:05:05.147 12:26:13 -- setup/common.sh@33 -- # return 0 00:05:05.147 12:26:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.147 12:26:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.147 12:26:13 -- setup/hugepages.sh@27 -- # local node 00:05:05.147 12:26:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.147 12:26:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:05.147 12:26:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.147 12:26:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.147 12:26:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.147 12:26:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.147 12:26:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.147 12:26:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.147 12:26:13 -- setup/common.sh@18 -- # local node=0 00:05:05.147 12:26:13 -- setup/common.sh@19 -- # local var val 00:05:05.147 12:26:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.147 12:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.147 12:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.147 12:26:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.147 12:26:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.147 12:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8580144 kB' 'MemUsed: 3661828 kB' 'SwapCached: 0 kB' 'Active: 853932 kB' 'Inactive: 1442208 kB' 'Active(anon): 127544 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2179068 kB' 'Mapped: 48704 kB' 'AnonPages: 118660 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65576 kB' 'Slab: 143100 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 
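The remaining checks are per NUMA node. get_nodes (setup/hugepages.sh@27..@33) enumerates /sys/devices/system/node/node* and records the expected page count per node, after which get_meminfo is re-entered with node=0; that flips its source file from /proc/meminfo to the node-local sysfs meminfo, which is why this dump carries MemUsed and FilePages keys that /proc/meminfo does not have. A sketch of get_nodes (how the 512 is obtained and how no_nodes is derived are not visible in the trace, only their results, so both are assumptions):

    shopt -s extglob
    declare -a nodes_sys

    get_nodes() {                # sketch of setup/hugepages.sh@27..@33
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=512    # trace shows the expanded value;
                                             # presumably read from sysfs
        done
        no_nodes=${#nodes_sys[@]}            # trace: no_nodes=1 on this VM
        (( no_nodes > 0 ))                   # @33 sanity check
    }
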
12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.147 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.147 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.148 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.148 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.148 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.148 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.148 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.148 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.148 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.148 12:26:13 -- setup/common.sh@32 -- # continue 00:05:05.148 12:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.148 12:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.148 12:26:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.148 12:26:13 -- setup/common.sh@33 -- # echo 0 00:05:05.148 12:26:13 -- setup/common.sh@33 -- # return 0 00:05:05.148 12:26:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.148 12:26:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.148 12:26:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.148 12:26:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.148 node0=512 expecting 512 00:05:05.148 12:26:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:05.148 12:26:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:05.148 00:05:05.148 real 0m0.672s 00:05:05.148 user 0m0.320s 00:05:05.148 sys 0m0.397s 00:05:05.148 12:26:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.148 12:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.148 ************************************ 00:05:05.148 END TEST custom_alloc 00:05:05.148 ************************************ 00:05:05.148 12:26:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:05.148 12:26:14 -- 
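Every readback in this trace goes through get_meminfo from setup/common.sh. Reconstructed from the xtrace alone, so treat it as a sketch rather than a verbatim copy of the script (the extglob setting and the exact branch structure around the node check are assumptions):

    # Sketch of setup/common.sh:get_meminfo as implied by the trace above.
    # Assumption: extglob is enabled for the +([0-9]) pattern.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's own sysfs meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"   # e.g. 512 for "HugePages_Total: 512"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as get_meminfo HugePages_Surp 0 it answers for node 0, as in the readback just above; with no node argument it falls back to the system-wide /proc/meminfo.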
00:05:05.148 12:26:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:05.148 12:26:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:05.148 12:26:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:05.148 12:26:14 -- common/autotest_common.sh@10 -- # set +x
00:05:05.148 ************************************
00:05:05.148 START TEST no_shrink_alloc
00:05:05.148 ************************************
00:05:05.148 12:26:14 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:05:05.148 12:26:14 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:05.148 12:26:14 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:05.148 12:26:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:05.148 12:26:14 -- setup/hugepages.sh@51 -- # shift
00:05:05.148 12:26:14 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:05.148 12:26:14 -- setup/hugepages.sh@52 -- # local node_ids
00:05:05.148 12:26:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:05.148 12:26:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:05.148 12:26:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:05.148 12:26:14 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:05.148 12:26:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.148 12:26:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:05.148 12:26:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:05.148 12:26:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.148 12:26:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:05.148 12:26:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:05.148 12:26:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:05.148 12:26:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:05.148 12:26:14 -- setup/hugepages.sh@73 -- # return 0
00:05:05.148 12:26:14 -- setup/hugepages.sh@198 -- # setup output
00:05:05.148 12:26:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.148 12:26:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:05.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:05.720 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.720 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.720 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.720 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.720 12:26:14 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:05.720 12:26:14 -- setup/hugepages.sh@89 -- # local node
00:05:05.720 12:26:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:05.720 12:26:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:05.720 12:26:14 -- setup/hugepages.sh@92 -- # local surp
00:05:05.720 12:26:14 -- setup/hugepages.sh@93 -- # local resv
00:05:05.720 12:26:14 -- setup/hugepages.sh@94 -- # local anon
00:05:05.720 12:26:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:05.720 12:26:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:05.720 12:26:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:05.720 12:26:14 -- setup/common.sh@18 -- # local node=
00:05:05.720 12:26:14 -- setup/common.sh@19 -- # local var val
00:05:05.720 12:26:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.720 12:26:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.720 12:26:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.720 12:26:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.720 12:26:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.720 12:26:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.720 12:26:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7532900 kB' 'MemAvailable: 9497120 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 851200 kB' 'Inactive: 1442208 kB' 'Active(anon): 124812 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115936 kB' 'Mapped: 48284 kB' 'Shmem: 10472 kB' 'KReclaimable: 65576 kB' 'Slab: 143068 kB' 'SReclaimable: 65576 kB' 'SUnreclaim: 77492 kB' 'KernelStack: 6212 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 step through each meminfo field above, continuing until AnonHugePages matches]
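The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96, a few lines up, is verify_nr_hugepages checking the transparent-hugepage mode: the left-hand string is the content of /sys/kernel/mm/transparent_hugepage/enabled on this VM, with brackets marking the active mode. Unpacked (variable names here are illustrative):

    # Only when THP is not hard-disabled can anonymous huge pages exist, so
    # only then is AnonHugePages worth reading before checking the pool.
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot above
    else
        anon=0
    fi

Here the active mode is [madvise], so the scan above runs and comes back 0.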
00:05:05.721 12:26:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.721 12:26:14 -- setup/common.sh@33 -- # echo 0
00:05:05.721 12:26:14 -- setup/common.sh@33 -- # return 0
00:05:05.721 12:26:14 -- setup/hugepages.sh@97 -- # anon=0
00:05:05.721 12:26:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:05.721 12:26:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.721 12:26:14 -- setup/common.sh@18 -- # local node=
00:05:05.721 12:26:14 -- setup/common.sh@19 -- # local var val
00:05:05.721 12:26:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.721 12:26:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.721 12:26:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.721 12:26:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.721 12:26:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.721 12:26:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.721 12:26:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7532900 kB' 'MemAvailable: 9497116 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 850564 kB' 'Inactive: 1442208 kB' 'Active(anon): 124176 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115312 kB' 'Mapped: 48044 kB' 'Shmem: 10472 kB' 'KReclaimable: 65568 kB' 'Slab: 143040 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6148 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 step through each meminfo field above, continuing until HugePages_Surp matches]
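For context on the value being extracted: HugePages_Surp counts pages the kernel allocated beyond vm.nr_hugepages, which can only happen when vm.nr_overcommit_hugepages permits overcommit. Nothing in this run raises that limit, so the expected answer is 0. A quick manual readback on a comparable box (illustrative, not part of the suite):

    # The pool counters verify_nr_hugepages reasons about.
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    # The overcommit ceiling that governs whether Surp can ever be non-zero.
    sysctl vm.nr_overcommit_hugepages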
00:05:05.722 12:26:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.722 12:26:14 -- setup/common.sh@33 -- # echo 0
00:05:05.722 12:26:14 -- setup/common.sh@33 -- # return 0
00:05:05.722 12:26:14 -- setup/hugepages.sh@99 -- # surp=0
00:05:05.722 12:26:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:05.722 12:26:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:05.722 12:26:14 -- setup/common.sh@18 -- # local node=
00:05:05.722 12:26:14 -- setup/common.sh@19 -- # local var val
00:05:05.722 12:26:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.722 12:26:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.722 12:26:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.722 12:26:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.722 12:26:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.722 12:26:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.723 12:26:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7532900 kB' 'MemAvailable: 9497116 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 850796 kB' 'Inactive: 1442208 kB' 'Active(anon): 124408 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115516 kB' 'Mapped: 47964 kB' 'Shmem: 10472 kB' 'KReclaimable: 65568 kB' 'Slab: 143036 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6192 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 step through each meminfo field above, continuing until HugePages_Rsvd matches]
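Before the result lands, it is worth noting where the 1024 used in the checks just below comes from. get_test_nr_hugepages was called with 2097152 (kB) at hugepages.sh@195 and the pool's page size is 2048 kB ('Hugepagesize: 2048 kB' in every snapshot). The division itself is not visible in this excerpt, so the arithmetic below is the implied step, not quoted code:

    default_hugepages=2048    # kB, per 'Hugepagesize: 2048 kB'
    size=2097152              # kB, first argument to get_test_nr_hugepages
    (( size >= default_hugepages ))                  # the @55 sanity check
    nr_hugepages=$(( size / default_hugepages ))     # implied: 1024 pages
    echo "nr_hugepages=$nr_hugepages"

The result is consistent with 'Hugetlb: 2097152 kB' in the snapshots, i.e. 1024 pages of 2048 kB each.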
00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:05.724 12:26:14 -- setup/common.sh@33 -- # echo 0
00:05:05.724 12:26:14 -- setup/common.sh@33 -- # return 0
00:05:05.724 12:26:14 -- setup/hugepages.sh@100 -- # resv=0
00:05:05.724 nr_hugepages=1024
00:05:05.724 12:26:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:05.724 resv_hugepages=0
00:05:05.724 12:26:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:05.724 surplus_hugepages=0
00:05:05.724 12:26:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:05.724 anon_hugepages=0
00:05:05.724 12:26:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:05.724 12:26:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.724 12:26:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:05.724 12:26:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:05.724 12:26:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:05.724 12:26:14 -- setup/common.sh@18 -- # local node=
00:05:05.724 12:26:14 -- setup/common.sh@19 -- # local var val
00:05:05.724 12:26:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.724 12:26:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.724 12:26:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.724 12:26:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.724 12:26:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.724 12:26:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.724 12:26:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7533508 kB' 'MemAvailable: 9497724 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 850744 kB' 'Inactive: 1442208 kB' 'Active(anon): 124356 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115456 kB' 'Mapped: 47964 kB' 'Shmem: 10472 kB' 'KReclaimable: 65568 kB' 'Slab: 143036 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6160 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
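The @107 and @109 checks just above are the heart of verify_nr_hugepages: the pool the kernel reports must equal what the test configured plus any surplus and reserved pages, and with surp and resv both 0 it must equal nr_hugepages exactly. Restated as a standalone assertion (the echo diagnostics are illustrative, not the script's wording):

    nr_hugepages=1024
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    (( total == nr_hugepages + surp + resv )) || echo "pool accounting mismatch"
    (( total == nr_hugepages ))               || echo "unexpected surp/resv pages"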
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.724 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.724 12:26:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # continue 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.725 12:26:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.725 12:26:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.725 12:26:14 -- setup/common.sh@33 -- # echo 1024 00:05:05.725 12:26:14 -- setup/common.sh@33 -- # return 0 00:05:05.725 12:26:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.725 12:26:14 -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.725 12:26:14 -- setup/hugepages.sh@27 -- # local node 00:05:05.725 12:26:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.725 12:26:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:05.725 12:26:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.725 12:26:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.725 12:26:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.725 12:26:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.725 12:26:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.725 12:26:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.725 12:26:14 -- setup/common.sh@18 -- # local node=0 00:05:05.725 12:26:14 -- setup/common.sh@19 -- # local var val 00:05:05.725 12:26:14 -- 
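The loop condensed above is the whole trick behind get_meminfo: split each line of the file on ': ' and skip keys until the requested one appears, then print its value. A minimal sketch of that pattern, simplified from what the trace shows and not the SPDK helper verbatim:

    #!/usr/bin/env bash
    # Look up one field of /proc/meminfo, e.g. "HugePages_Total" -> "1024".
    get_meminfo() {
        local get=$1
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"                        # value only, trailing "kB" dropped
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on this runner, per the trace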
00:05:05.725 12:26:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.725 12:26:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.725 12:26:14 -- setup/common.sh@18 -- # local node=0
00:05:05.725 12:26:14 -- setup/common.sh@19 -- # local var val
00:05:05.725 12:26:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:05.725 12:26:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.725 12:26:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.725 12:26:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.725 12:26:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.725 12:26:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.725 12:26:14 -- setup/common.sh@31 -- # IFS=': '
00:05:05.726 12:26:14 -- setup/common.sh@31 -- # read -r var val _
00:05:05.726 12:26:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7533508 kB' 'MemUsed: 4708464 kB' 'SwapCached: 0 kB' 'Active: 850784 kB' 'Inactive: 1442208 kB' 'Active(anon): 124396 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2179068 kB' 'Mapped: 47964 kB' 'AnonPages: 115496 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65568 kB' 'Slab: 143032 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: the node0 snapshot is scanned key by key, MemTotal through HugePages_Free, with "continue" on each non-matching key]
00:05:05.726 12:26:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.726 12:26:14 -- setup/common.sh@33 -- # echo 0
00:05:05.726 12:26:14 -- setup/common.sh@33 -- # return 0
00:05:05.726 12:26:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.726 12:26:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:05.726 12:26:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:05.726 12:26:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:05.726 node0=1024 expecting 1024
00:05:05.726 12:26:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:05.726 12:26:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:05.727 12:26:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:05.727 12:26:14 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:05.727 12:26:14 -- setup/hugepages.sh@202 -- # setup output
00:05:05.727 12:26:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.727 12:26:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:06.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:06.298 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:06.298 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:06.298 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:06.298 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:06.298 INFO: Requested 512 hugepages but 1024 already allocated on node0
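When get_meminfo is called with a node argument, as in get_meminfo HugePages_Surp 0 above, the helper switches to the per-node sysfs file and strips the "Node 0 " prefix with the extglob expansion seen at common.sh@29. A sketch of that branch under the same assumptions (helper name and structure simplified, not the SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    node_meminfo() {
        local get=$1 node=$2
        local mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        local line var val _

        [[ -e $mem_f ]] || return 1
        mapfile -t mem < "$mem_f"
        # "Node 0 MemTotal: ..." -> "MemTotal: ...", so lines parse like /proc/meminfo
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    node_meminfo HugePages_Surp 0   # prints 0 here, matching the trace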
00:05:06.298 12:26:15 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:06.298 12:26:15 -- setup/hugepages.sh@89 -- # local node
00:05:06.298 12:26:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:06.298 12:26:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:06.298 12:26:15 -- setup/hugepages.sh@92 -- # local surp
00:05:06.298 12:26:15 -- setup/hugepages.sh@93 -- # local resv
00:05:06.298 12:26:15 -- setup/hugepages.sh@94 -- # local anon
00:05:06.298 12:26:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:06.298 12:26:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:06.298 12:26:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:06.298 12:26:15 -- setup/common.sh@18 -- # local node=
00:05:06.298 12:26:15 -- setup/common.sh@19 -- # local var val
00:05:06.298 12:26:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:06.298 12:26:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.298 12:26:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.298 12:26:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.298 12:26:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.298 12:26:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.298 12:26:15 -- setup/common.sh@31 -- # IFS=': '
00:05:06.298 12:26:15 -- setup/common.sh@31 -- # read -r var val _
00:05:06.298 12:26:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7535988 kB' 'MemAvailable: 9500204 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 851364 kB' 'Inactive: 1442208 kB' 'Active(anon): 124976 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116216 kB' 'Mapped: 48112 kB' 'Shmem: 10472 kB' 'KReclaimable: 65568 kB' 'Slab: 143012 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77444 kB' 'KernelStack: 6392 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: the snapshot is scanned key by key, MemTotal through HardwareCorrupted, with "continue" on each non-matching key]
00:05:06.299 12:26:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.299 12:26:15 -- setup/common.sh@33 -- # echo 0
00:05:06.299 12:26:15 -- setup/common.sh@33 -- # return 0
00:05:06.299 12:26:15 -- setup/hugepages.sh@97 -- # anon=0
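verify_nr_hugepages only counts anonymous huge pages after confirming THP is not disabled: the test at hugepages.sh@96 matched the string "always [madvise] never", which is what the standard kernel control file reports when madvise mode is active. A sketch of that step (the sysfs path is the stock kernel interface, inferred rather than quoted from the job):

    #!/usr/bin/env bash
    # e.g. "always [madvise] never"; brackets mark the active THP mode
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not fully off, so AnonHugePages may be nonzero; read it (kB).
        IFS=': ' read -r _ anon _ < <(grep '^AnonHugePages' /proc/meminfo)
    fi
    echo "anon_hugepages=$anon"   # 0 in this run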
00:05:06.299 12:26:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:06.299 12:26:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.299 12:26:15 -- setup/common.sh@18 -- # local node=
00:05:06.299 12:26:15 -- setup/common.sh@19 -- # local var val
00:05:06.299 12:26:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:06.299 12:26:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.299 12:26:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.299 12:26:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.299 12:26:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.299 12:26:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.299 12:26:15 -- setup/common.sh@31 -- # IFS=': '
00:05:06.299 12:26:15 -- setup/common.sh@31 -- # read -r var val _
00:05:06.299 12:26:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7536240 kB' 'MemAvailable: 9500456 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 851556 kB' 'Inactive: 1442208 kB' 'Active(anon): 125168 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115912 kB' 'Mapped: 48228 kB' 'Shmem: 10472 kB' 'KReclaimable: 65568 kB' 'Slab: 143012 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77444 kB' 'KernelStack: 6376 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: the snapshot is scanned key by key, MemTotal through HugePages_Rsvd, with "continue" on each non-matching key]
00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.301 12:26:15 -- setup/common.sh@33 -- # echo 0
00:05:06.301 12:26:15 -- setup/common.sh@33 -- # return 0
00:05:06.301 12:26:15 -- setup/hugepages.sh@99 -- # surp=0
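With surp fetched and resv about to be, the verifier's bookkeeping reduces to the arithmetic already traced at hugepages.sh@107: the configured pool size must be accounted for by nr_hugepages plus surplus plus reserved pages. A loose re-creation (variable roles inferred from the trace, so treat it as illustrative only):

    #!/usr/bin/env bash
    requested=1024   # pool size this job expects

    get() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

    nr=$(get HugePages_Total)    # 1024 in this run
    surp=$(get HugePages_Surp)   # 0
    resv=$(get HugePages_Rsvd)   # 0

    # Mirrors (( 1024 == nr_hugepages + surp + resv )) from the trace.
    (( requested == nr + surp + resv )) && echo "hugepage accounting consistent"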
2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.301 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.301 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.302 12:26:15 -- setup/common.sh@33 -- # echo 0 00:05:06.302 12:26:15 -- setup/common.sh@33 -- # return 0 00:05:06.302 12:26:15 -- setup/hugepages.sh@100 -- # resv=0 00:05:06.302 nr_hugepages=1024 00:05:06.302 12:26:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.302 resv_hugepages=0 00:05:06.302 12:26:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.302 surplus_hugepages=0 00:05:06.302 12:26:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.302 anon_hugepages=0 00:05:06.302 12:26:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.302 12:26:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.302 12:26:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.302 12:26:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.302 12:26:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.302 12:26:15 -- setup/common.sh@18 -- # local node= 00:05:06.302 12:26:15 -- setup/common.sh@19 -- # local var val 00:05:06.302 12:26:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.302 12:26:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.302 12:26:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.302 12:26:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.302 12:26:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.302 12:26:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7535764 kB' 'MemAvailable: 9499980 kB' 'Buffers: 2436 kB' 'Cached: 2176632 kB' 'SwapCached: 0 kB' 'Active: 850884 kB' 'Inactive: 1442208 kB' 'Active(anon): 124496 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115696 kB' 'Mapped: 48012 kB' 'Shmem: 10472 kB' 'KReclaimable: 65568 kB' 'Slab: 143036 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6200 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 
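The scans above and below are setup/common.sh's get_meminfo walking every key of a meminfo file until it reaches the requested one (each non-matching key shows up as one '# continue' in the xtrace), then echoing its value. A simplified sketch of that pattern follows; it is not the traced implementation verbatim (the real helper slurps the file with mapfile and strips any 'Node N ' prefix), and the sed-based prefix strip here is our own shortcut:

    # simplified sketch of the get_meminfo pattern driving the xtrace above
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # a per-node file is picked only when a node id was passed in
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # every non-matching key is one 'continue' in the trace
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0
    }

Called as 'get_meminfo HugePages_Rsvd' with no node id it falls back to /proc/meminfo, which is why the trace tests the nonexistent path /sys/devices/system/node/node/meminfo first.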
00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.302 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.302 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 
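With surp and resv in hand, hugepages.sh@107 (traced just above) asserts that the kernel's pool adds up: HugePages_Total must equal the requested persistent pages plus surplus plus reserved, evaluated here as (( 1024 == nr_hugepages + surp + resv )). A sketch of the same consistency check, reusing the get_meminfo sketch above; the values in the comments are the ones this run reports:

    # pool-accounting check mirroring hugepages.sh@107 in the trace
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2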
00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.303 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.303 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.304 12:26:15 -- setup/common.sh@33 -- # echo 1024 00:05:06.304 12:26:15 -- setup/common.sh@33 -- # return 0 00:05:06.304 12:26:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.304 12:26:15 -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.304 12:26:15 -- setup/hugepages.sh@27 -- # local node 00:05:06.304 12:26:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.304 12:26:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:06.304 12:26:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.304 12:26:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.304 12:26:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.304 12:26:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.304 12:26:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.304 12:26:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.304 12:26:15 -- setup/common.sh@18 -- # local node=0 00:05:06.304 12:26:15 -- 
setup/common.sh@19 -- # local var val 00:05:06.304 12:26:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.304 12:26:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.304 12:26:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.304 12:26:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.304 12:26:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.304 12:26:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7535764 kB' 'MemUsed: 4706208 kB' 'SwapCached: 0 kB' 'Active: 850760 kB' 'Inactive: 1442208 kB' 'Active(anon): 124372 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1442208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2179068 kB' 'Mapped: 47964 kB' 'AnonPages: 115504 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65568 kB' 'Slab: 143036 kB' 'SReclaimable: 65568 kB' 'SUnreclaim: 77468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.304 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.304 12:26:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 
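This pass differs from the earlier ones: get_meminfo was called with node=0 (hugepages.sh@117 asking for HugePages_Surp on node 0), so mem_f switched to /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0' prefix and a slightly different field set (MemUsed and FilePages in place of MemAvailable, Buffers and Cached). A standalone way to pull one per-node counter; node_meminfo is a hypothetical helper name, not something from the traced scripts:

    # hypothetical helper: read one counter from a NUMA node's meminfo
    node_meminfo() {
        local node=$1 key=$2
        # per-node lines look like: "Node 0 HugePages_Surp:     0"
        awk -v k="$key" '$3 == (k ":") { print $4; exit }' \
            "/sys/devices/system/node/node$node/meminfo"
    }
    node_meminfo 0 HugePages_Surp    # prints 0 on this host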
00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 
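Once the node-0 surplus comes back as 0, the test prints its expectation, checks [[ 1024 == 1024 ]], and tears down with clear_hp, which loops over every hugepage pool under /sys/devices/system/node/node*/hugepages/ and echoes 0 into it before exporting CLEAR_HUGE=yes. The trace shows the 'echo 0' but not its redirect target, so the nr_hugepages path below is our assumption about where the 0 lands; a root-only sketch:

    # sketch of the clear_hp teardown traced below (root only)
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target of the traced 'echo 0'
    done
    export CLEAR_HUGE=yes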
00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # continue 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.564 12:26:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.564 12:26:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.564 12:26:15 -- setup/common.sh@33 -- # echo 0 00:05:06.564 12:26:15 -- setup/common.sh@33 -- # return 0 00:05:06.564 12:26:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.564 12:26:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.564 12:26:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.564 12:26:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.564 node0=1024 expecting 1024 00:05:06.564 12:26:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:06.564 12:26:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:06.564 00:05:06.564 real 0m1.304s 00:05:06.564 user 0m0.605s 00:05:06.564 sys 0m0.784s 00:05:06.564 12:26:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.564 12:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.564 ************************************ 00:05:06.564 END TEST no_shrink_alloc 00:05:06.564 ************************************ 00:05:06.564 12:26:15 -- setup/hugepages.sh@217 -- # clear_hp 00:05:06.564 12:26:15 -- setup/hugepages.sh@37 -- # local node hp 00:05:06.564 12:26:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:06.564 12:26:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.564 12:26:15 -- setup/hugepages.sh@41 -- # echo 0 00:05:06.564 12:26:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:06.564 12:26:15 -- setup/hugepages.sh@41 -- # echo 0 00:05:06.564 12:26:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:06.565 12:26:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:06.565 00:05:06.565 real 0m5.987s 00:05:06.565 user 0m2.678s 00:05:06.565 sys 0m3.492s 00:05:06.565 12:26:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.565 ************************************ 00:05:06.565 END TEST hugepages 00:05:06.565 12:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.565 ************************************ 00:05:06.565 12:26:15 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:06.565 12:26:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.565 12:26:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.565 12:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.565 
************************************ 00:05:06.565 START TEST driver 00:05:06.565 ************************************ 00:05:06.565 12:26:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:06.565 * Looking for test storage... 00:05:06.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.565 12:26:15 -- setup/driver.sh@68 -- # setup reset 00:05:06.565 12:26:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.565 12:26:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.127 12:26:21 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:13.127 12:26:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.127 12:26:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.127 12:26:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.127 ************************************ 00:05:13.127 START TEST guess_driver 00:05:13.127 ************************************ 00:05:13.127 12:26:21 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:13.127 12:26:21 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:13.127 12:26:21 -- setup/driver.sh@47 -- # local fail=0 00:05:13.127 12:26:21 -- setup/driver.sh@49 -- # pick_driver 00:05:13.127 12:26:21 -- setup/driver.sh@36 -- # vfio 00:05:13.127 12:26:21 -- setup/driver.sh@21 -- # local iommu_grups 00:05:13.127 12:26:21 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:13.127 12:26:21 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:13.127 12:26:21 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:13.127 12:26:21 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:13.127 12:26:21 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:13.127 12:26:21 -- setup/driver.sh@32 -- # return 1 00:05:13.127 12:26:21 -- setup/driver.sh@38 -- # uio 00:05:13.127 12:26:21 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:13.127 12:26:21 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:13.127 12:26:21 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:13.127 12:26:21 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:13.127 12:26:21 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:13.127 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:13.127 12:26:21 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:13.127 12:26:21 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:13.127 12:26:21 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:13.127 Looking for driver=uio_pci_generic 00:05:13.127 12:26:21 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:13.127 12:26:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.127 12:26:21 -- setup/driver.sh@45 -- # setup output config 00:05:13.127 12:26:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.127 12:26:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.385 lsblk: /dev/nvme0c0n1: not a block device 00:05:13.644 12:26:22 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:13.644 12:26:22 -- setup/driver.sh@58 -- # continue 00:05:13.644 12:26:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.644 12:26:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.644 12:26:22 -- 
setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.644 12:26:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.644 12:26:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.644 12:26:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.644 12:26:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.644 12:26:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.644 12:26:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.644 12:26:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.906 12:26:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.906 12:26:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.906 12:26:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.906 12:26:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:13.906 12:26:22 -- setup/driver.sh@65 -- # setup reset 00:05:13.906 12:26:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.906 12:26:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.533 00:05:20.533 real 0m7.236s 00:05:20.533 user 0m0.856s 00:05:20.533 sys 0m1.519s 00:05:20.533 12:26:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.533 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:20.533 ************************************ 00:05:20.533 END TEST guess_driver 00:05:20.533 ************************************ 00:05:20.533 00:05:20.533 real 0m13.274s 00:05:20.533 user 0m1.207s 00:05:20.533 sys 0m2.351s 00:05:20.533 12:26:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.533 ************************************ 00:05:20.533 END TEST driver 00:05:20.533 ************************************ 00:05:20.533 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:20.533 12:26:28 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:20.533 12:26:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.533 12:26:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.533 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:20.533 ************************************ 00:05:20.533 START TEST devices 00:05:20.533 ************************************ 00:05:20.533 12:26:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:20.533 * Looking for test storage... 
00:05:20.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.533 12:26:28 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:20.533 12:26:28 -- setup/devices.sh@192 -- # setup reset 00:05:20.533 12:26:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.533 12:26:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.101 12:26:30 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:21.101 12:26:30 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:21.101 12:26:30 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:21.101 12:26:30 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:21.101 12:26:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.101 12:26:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.101 12:26:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.101 12:26:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.101 12:26:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:21.101 12:26:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:21.101 12:26:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.101 12:26:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:21.101 12:26:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:21.101 12:26:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.101 12:26:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:21.101 12:26:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1647 -- # local 
device=nvme3n1 00:05:21.101 12:26:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:21.101 12:26:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:21.101 12:26:30 -- setup/devices.sh@196 -- # blocks=() 00:05:21.101 12:26:30 -- setup/devices.sh@196 -- # declare -a blocks 00:05:21.101 12:26:30 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:21.101 12:26:30 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:21.101 12:26:30 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:21.101 12:26:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.101 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:21.101 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:21.101 12:26:30 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:05:21.101 12:26:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:05:21.101 12:26:30 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:21.101 12:26:30 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:21.101 12:26:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:21.360 No valid GPT data, bailing 00:05:21.360 12:26:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:21.360 12:26:30 -- scripts/common.sh@393 -- # pt= 00:05:21.360 12:26:30 -- scripts/common.sh@394 -- # return 1 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:21.360 12:26:30 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:21.360 12:26:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:21.360 12:26:30 -- setup/common.sh@80 -- # echo 1073741824 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:21.360 12:26:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.360 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:21.360 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:21.360 12:26:30 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:21.360 12:26:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:21.360 12:26:30 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:21.360 12:26:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:21.360 No valid GPT data, bailing 00:05:21.360 12:26:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:21.360 12:26:30 -- scripts/common.sh@393 -- # pt= 00:05:21.360 12:26:30 -- scripts/common.sh@394 -- # return 1 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:21.360 12:26:30 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:21.360 12:26:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:21.360 12:26:30 -- setup/common.sh@80 -- # echo 4294967296 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.360 12:26:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.360 12:26:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:21.360 12:26:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.360 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:21.360 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:21.360 12:26:30 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:21.360 12:26:30 -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:21.360 12:26:30 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:21.360 12:26:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:21.360 No valid GPT data, bailing 00:05:21.360 12:26:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:21.360 12:26:30 -- scripts/common.sh@393 -- # pt= 00:05:21.360 12:26:30 -- scripts/common.sh@394 -- # return 1 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:21.360 12:26:30 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:21.360 12:26:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:21.360 12:26:30 -- setup/common.sh@80 -- # echo 4294967296 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.360 12:26:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.360 12:26:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:21.360 12:26:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.360 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:21.360 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:21.360 12:26:30 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:21.360 12:26:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:21.360 12:26:30 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:21.360 12:26:30 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:21.360 12:26:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:21.618 No valid GPT data, bailing 00:05:21.618 12:26:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:21.618 12:26:30 -- scripts/common.sh@393 -- # pt= 00:05:21.618 12:26:30 -- scripts/common.sh@394 -- # return 1 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:21.618 12:26:30 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:21.618 12:26:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:21.618 12:26:30 -- setup/common.sh@80 -- # echo 4294967296 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.618 12:26:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.618 12:26:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:21.618 12:26:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.618 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:21.618 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:21.618 12:26:30 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:21.618 12:26:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:21.618 12:26:30 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:05:21.618 12:26:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:21.618 No valid GPT data, bailing 00:05:21.618 12:26:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:21.618 12:26:30 -- scripts/common.sh@393 -- # pt= 00:05:21.618 12:26:30 -- scripts/common.sh@394 -- # return 1 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:21.618 12:26:30 -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:21.618 12:26:30 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:05:21.618 12:26:30 -- setup/common.sh@80 -- # echo 6343335936 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:21.618 12:26:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.618 12:26:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:21.618 12:26:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.618 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:21.618 12:26:30 -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:21.618 12:26:30 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:21.618 12:26:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:21.618 12:26:30 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:05:21.618 12:26:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:21.618 No valid GPT data, bailing 00:05:21.618 12:26:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:21.618 12:26:30 -- scripts/common.sh@393 -- # pt= 00:05:21.618 12:26:30 -- scripts/common.sh@394 -- # return 1 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:21.618 12:26:30 -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:21.618 12:26:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:21.618 12:26:30 -- setup/common.sh@80 -- # echo 5368709120 00:05:21.618 12:26:30 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:21.618 12:26:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.618 12:26:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:21.618 12:26:30 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:21.618 12:26:30 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:05:21.618 12:26:30 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:21.618 12:26:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.618 12:26:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.618 12:26:30 -- common/autotest_common.sh@10 -- # set +x 00:05:21.618 ************************************ 00:05:21.618 START TEST nvme_mount 00:05:21.618 ************************************ 00:05:21.618 12:26:30 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:21.618 12:26:30 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:05:21.618 12:26:30 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:05:21.618 12:26:30 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.618 12:26:30 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.618 12:26:30 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:05:21.618 12:26:30 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:21.618 12:26:30 -- setup/common.sh@40 -- # local part_no=1 00:05:21.618 12:26:30 -- setup/common.sh@41 -- # local size=1073741824 00:05:21.618 12:26:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:21.618 12:26:30 -- setup/common.sh@44 -- # parts=() 00:05:21.618 12:26:30 -- setup/common.sh@44 -- # local parts 00:05:21.618 12:26:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:21.618 12:26:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.618 12:26:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:21.618 12:26:30 -- setup/common.sh@46 -- # (( 
part++ )) 00:05:21.618 12:26:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.618 12:26:30 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:21.618 12:26:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:21.618 12:26:30 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:05:22.991 Creating new GPT entries in memory. 00:05:22.991 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:22.991 other utilities. 00:05:22.991 12:26:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:22.991 12:26:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.991 12:26:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:22.991 12:26:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:22.991 12:26:31 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:23.924 Creating new GPT entries in memory. 00:05:23.924 The operation has completed successfully. 00:05:23.924 12:26:32 -- setup/common.sh@57 -- # (( part++ )) 00:05:23.924 12:26:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.924 12:26:32 -- setup/common.sh@62 -- # wait 54330 00:05:23.924 12:26:32 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.924 12:26:32 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:23.924 12:26:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.924 12:26:32 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:05:23.924 12:26:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:05:23.924 12:26:32 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.924 12:26:32 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.924 12:26:32 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:23.924 12:26:32 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:05:23.925 12:26:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.925 12:26:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.925 12:26:32 -- setup/devices.sh@53 -- # local found=0 00:05:23.925 12:26:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.925 12:26:32 -- setup/devices.sh@56 -- # : 00:05:23.925 12:26:32 -- setup/devices.sh@59 -- # local pci status 00:05:23.925 12:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.925 12:26:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:23.925 12:26:32 -- setup/devices.sh@47 -- # setup output config 00:05:23.925 12:26:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.925 12:26:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.925 12:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:23.925 12:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.182 12:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:24.182 12:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.440 
12:26:33 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:24.440 12:26:33 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:05:24.440 12:26:33 -- setup/devices.sh@63 -- # found=1 00:05:24.440 12:26:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.440 12:26:33 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:24.440 12:26:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.440 lsblk: /dev/nvme0c0n1: not a block device 00:05:24.440 12:26:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:24.440 12:26:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.697 12:26:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:24.697 12:26:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.697 12:26:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.697 12:26:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:24.697 12:26:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.697 12:26:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.697 12:26:33 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.697 12:26:33 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:24.697 12:26:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.697 12:26:33 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.697 12:26:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:24.697 12:26:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:24.697 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.697 12:26:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:24.697 12:26:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:24.955 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:24.955 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:24.955 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:24.955 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:24.955 12:26:33 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:24.955 12:26:33 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:24.955 12:26:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.955 12:26:33 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:05:24.955 12:26:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:05:24.955 12:26:33 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.955 12:26:33 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.955 12:26:33 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:24.955 12:26:33 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:05:24.955 12:26:33 -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.955 12:26:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.955 12:26:33 -- setup/devices.sh@53 -- # local found=0 00:05:24.955 12:26:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.955 12:26:33 -- setup/devices.sh@56 -- # : 00:05:24.955 12:26:33 -- setup/devices.sh@59 -- # local pci status 00:05:24.955 12:26:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.955 12:26:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:24.955 12:26:33 -- setup/devices.sh@47 -- # setup output config 00:05:24.955 12:26:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.955 12:26:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.213 12:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:25.213 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.213 12:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:25.213 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.471 12:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:25.471 12:26:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:05:25.471 12:26:34 -- setup/devices.sh@63 -- # found=1 00:05:25.471 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.729 12:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:25.729 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.729 lsblk: /dev/nvme0c0n1: not a block device 00:05:25.729 12:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:25.729 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.987 12:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:25.987 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.987 12:26:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.987 12:26:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:25.987 12:26:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.987 12:26:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.987 12:26:34 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.987 12:26:34 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.987 12:26:34 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:05:25.987 12:26:34 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:25.987 12:26:34 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:05:25.987 12:26:34 -- setup/devices.sh@50 -- # local mount_point= 00:05:25.987 12:26:34 -- setup/devices.sh@51 -- # local test_file= 00:05:25.987 12:26:34 -- setup/devices.sh@53 -- # local found=0 00:05:25.987 12:26:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.987 12:26:34 -- setup/devices.sh@59 -- # local pci status 00:05:25.987 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.987 12:26:34 -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:25.987 12:26:34 -- setup/devices.sh@47 -- # setup output config 00:05:25.987 12:26:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.987 12:26:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.987 12:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:25.987 12:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.244 12:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:26.245 12:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.503 12:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:26.503 12:26:35 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:05:26.503 12:26:35 -- setup/devices.sh@63 -- # found=1 00:05:26.503 12:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.503 12:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:26.503 12:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.761 lsblk: /dev/nvme0c0n1: not a block device 00:05:26.761 12:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:26.762 12:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.762 12:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:26.762 12:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.019 12:26:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.019 12:26:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:27.019 12:26:35 -- setup/devices.sh@68 -- # return 0 00:05:27.019 12:26:35 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:27.019 12:26:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.019 12:26:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:27.019 12:26:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:27.019 12:26:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:27.019 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.019 00:05:27.019 real 0m5.278s 00:05:27.019 user 0m1.273s 00:05:27.019 sys 0m1.751s 00:05:27.019 12:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.020 ************************************ 00:05:27.020 END TEST nvme_mount 00:05:27.020 12:26:35 -- common/autotest_common.sh@10 -- # set +x 00:05:27.020 ************************************ 00:05:27.020 12:26:35 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:27.020 12:26:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.020 12:26:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.020 12:26:35 -- common/autotest_common.sh@10 -- # set +x 00:05:27.020 ************************************ 00:05:27.020 START TEST dm_mount 00:05:27.020 ************************************ 00:05:27.020 12:26:35 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:27.020 12:26:35 -- setup/devices.sh@144 -- # pv=nvme1n1 00:05:27.020 12:26:35 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:05:27.020 12:26:35 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:05:27.020 12:26:35 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:05:27.020 12:26:35 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:27.020 12:26:35 -- setup/common.sh@40 -- # local part_no=2 00:05:27.020 
12:26:35 -- setup/common.sh@41 -- # local size=1073741824 00:05:27.020 12:26:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:27.020 12:26:35 -- setup/common.sh@44 -- # parts=() 00:05:27.020 12:26:35 -- setup/common.sh@44 -- # local parts 00:05:27.020 12:26:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:27.020 12:26:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.020 12:26:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.020 12:26:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:27.020 12:26:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.020 12:26:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.020 12:26:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:27.020 12:26:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.020 12:26:35 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:27.020 12:26:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:27.020 12:26:35 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:05:27.953 Creating new GPT entries in memory. 00:05:27.953 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:27.953 other utilities. 00:05:27.953 12:26:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:27.953 12:26:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.953 12:26:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.953 12:26:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.953 12:26:36 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:28.944 Creating new GPT entries in memory. 00:05:28.944 The operation has completed successfully. 00:05:28.944 12:26:37 -- setup/common.sh@57 -- # (( part++ )) 00:05:28.944 12:26:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.944 12:26:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.944 12:26:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.944 12:26:37 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:05:30.322 The operation has completed successfully. 
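The partitioning just traced is worth pulling out of the xtrace noise. A minimal sketch of the same steps, assuming a scratch disk at /dev/nvme1n1 as in this run; the harness serializes sgdisk under flock and waits on its own sync_dev_uevents.sh helper, for which udevadm settle is used here as a stand-in:

  disk=/dev/nvme1n1
  sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:264191    # partition 1: 262144 sectors from sector 2048
  sgdisk "$disk" --new=2:264192:526335  # partition 2: the next 262144 sectors
  udevadm settle                        # stand-in: wait for /dev/nvme1n1p1 and p2 nodes

Each range is 262144 sectors because setup/common.sh@51 divides the 1073741824-byte size constant by 4096 before computing part_end = part_start + size - 1; the two partitions created here back the nvme_dm_test device-mapper target set up below.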
00:05:30.322 12:26:38 -- setup/common.sh@57 -- # (( part++ )) 00:05:30.322 12:26:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.322 12:26:38 -- setup/common.sh@62 -- # wait 55060 00:05:30.322 12:26:38 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:30.322 12:26:38 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.322 12:26:38 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.322 12:26:38 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:30.322 12:26:39 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:30.322 12:26:39 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.322 12:26:39 -- setup/devices.sh@161 -- # break 00:05:30.322 12:26:39 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.322 12:26:39 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:30.322 12:26:39 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:30.322 12:26:39 -- setup/devices.sh@166 -- # dm=dm-0 00:05:30.322 12:26:39 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:05:30.322 12:26:39 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:05:30.322 12:26:39 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.322 12:26:39 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:30.322 12:26:39 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.322 12:26:39 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.322 12:26:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:30.322 12:26:39 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.322 12:26:39 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.322 12:26:39 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:30.322 12:26:39 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:05:30.322 12:26:39 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.322 12:26:39 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.322 12:26:39 -- setup/devices.sh@53 -- # local found=0 00:05:30.322 12:26:39 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.322 12:26:39 -- setup/devices.sh@56 -- # : 00:05:30.322 12:26:39 -- setup/devices.sh@59 -- # local pci status 00:05:30.322 12:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.322 12:26:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:30.322 12:26:39 -- setup/devices.sh@47 -- # setup output config 00:05:30.322 12:26:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.322 12:26:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.322 12:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:30.322 12:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.322 12:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:30.322 12:26:39 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.888 12:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:30.888 12:26:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:30.889 12:26:39 -- setup/devices.sh@63 -- # found=1 00:05:30.889 12:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.889 12:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:30.889 12:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.889 lsblk: /dev/nvme0c0n1: not a block device 00:05:30.889 12:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:30.889 12:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.889 12:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:30.889 12:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.147 12:26:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.147 12:26:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:31.147 12:26:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.147 12:26:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.147 12:26:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.147 12:26:39 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.147 12:26:39 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:31.147 12:26:39 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:31.147 12:26:39 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:31.147 12:26:39 -- setup/devices.sh@50 -- # local mount_point= 00:05:31.147 12:26:39 -- setup/devices.sh@51 -- # local test_file= 00:05:31.147 12:26:39 -- setup/devices.sh@53 -- # local found=0 00:05:31.147 12:26:39 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.147 12:26:39 -- setup/devices.sh@59 -- # local pci status 00:05:31.147 12:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.147 12:26:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:31.147 12:26:39 -- setup/devices.sh@47 -- # setup output config 00:05:31.147 12:26:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.147 12:26:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.147 12:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:31.147 12:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.404 12:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:31.404 12:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.663 12:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:31.663 12:26:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:31.663 12:26:40 -- setup/devices.sh@63 -- # found=1 00:05:31.663 12:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:31.663 12:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:31.663 12:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.663 lsblk: /dev/nvme0c0n1: not a block device 00:05:31.929 12:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:31.929 12:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.929 12:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:31.929 12:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.929 12:26:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.929 12:26:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.929 12:26:40 -- setup/devices.sh@68 -- # return 0 00:05:31.929 12:26:40 -- setup/devices.sh@187 -- # cleanup_dm 00:05:31.929 12:26:40 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.929 12:26:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.929 12:26:40 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:32.196 12:26:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:32.196 12:26:40 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:32.196 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.196 12:26:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:32.196 12:26:40 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:32.196 00:05:32.196 real 0m5.071s 00:05:32.196 user 0m0.848s 00:05:32.196 sys 0m1.178s 00:05:32.196 12:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.196 12:26:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.196 ************************************ 00:05:32.196 END TEST dm_mount 00:05:32.196 ************************************ 00:05:32.196 12:26:40 -- setup/devices.sh@1 -- # cleanup 00:05:32.196 12:26:40 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:32.196 12:26:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.196 12:26:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:32.196 12:26:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:32.196 12:26:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:32.196 12:26:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:32.454 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.454 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.454 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:32.454 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:32.454 12:26:41 -- setup/devices.sh@12 -- # cleanup_dm 00:05:32.454 12:26:41 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.454 12:26:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:32.454 12:26:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:32.454 12:26:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:32.454 12:26:41 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:32.454 12:26:41 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:32.454 00:05:32.454 real 0m12.560s 00:05:32.454 user 0m3.097s 00:05:32.454 sys 0m3.873s 00:05:32.454 12:26:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.454 12:26:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.455 
************************************
00:05:32.455 END TEST devices
00:05:32.455 ************************************
00:05:32.455
00:05:32.455 real 0m44.077s
00:05:32.455 user 0m9.998s
00:05:32.455 sys 0m14.023s
00:05:32.455 12:26:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:32.455 12:26:41 -- common/autotest_common.sh@10 -- # set +x
00:05:32.455 ************************************
00:05:32.455 END TEST setup.sh
00:05:32.455 ************************************
00:05:32.455 12:26:41 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:32.713 Hugepages
00:05:32.713 node     hugesize     free /  total
00:05:32.713 node0   1048576kB        0 /      0
00:05:32.713 node0      2048kB     2048 /   2048
00:05:32.713
00:05:32.713 Type     BDF             Vendor Device NUMA    Driver       Device     Block devices
00:05:32.713 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci   -          vda
00:05:32.984 NVMe     0000:00:06.0    1b36   0010   unknown nvme         nvme2      nvme2n1
00:05:32.984 NVMe     0000:00:07.0    1b36   0010   unknown nvme         nvme3      nvme3n1
00:05:32.984 NVMe     0000:00:08.0    1b36   0010   unknown nvme         nvme1      nvme1n1 nvme1n2 nvme1n3
00:05:33.243 NVMe     0000:00:09.0    1b36   0010   unknown nvme         nvme0      nvme0c0n1
00:05:33.243 12:26:42 -- spdk/autotest.sh@141 -- # uname -s
00:05:33.243 12:26:42 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]]
00:05:33.243 12:26:42 -- spdk/autotest.sh@143 -- # nvme_namespace_revert
00:05:33.243 12:26:42 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:34.178 lsblk: /dev/nvme0c0n1: not a block device
00:05:34.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:34.436 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:05:34.436 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:05:34.436 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:34.436 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:05:34.436 12:26:43 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:35.810 12:26:44 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:35.810 12:26:44 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:35.810 12:26:44 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs))
00:05:35.810 12:26:44 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs
00:05:35.810 12:26:44 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:35.810 12:26:44 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:35.810 12:26:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:35.810 12:26:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:35.810 12:26:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:35.810 12:26:44 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:05:35.810 12:26:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:05:36.068 12:26:44 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:36.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:36.068 Waiting for block devices as requested
00:05:36.326 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:05:36.326 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:05:36.326 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:05:36.326 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:05:41.584 * Events for some block/disk devices 
(0000:00:09.0) were not caught, they may be missing 00:05:41.584 12:26:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:41.584 12:26:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:41.584 12:26:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:41.584 12:26:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.584 12:26:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:41.584 12:26:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:05:41.584 12:26:50 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:41.584 12:26:50 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:41.584 12:26:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:41.584 12:26:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:41.584 12:26:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:41.584 12:26:50 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:41.584 12:26:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:41.585 12:26:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1542 -- # continue 00:05:41.585 12:26:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:41.585 12:26:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:41.585 12:26:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:41.585 12:26:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # 
nvme id-ctrl /dev/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:41.585 12:26:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1542 -- # continue 00:05:41.585 12:26:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:41.585 12:26:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:41.585 12:26:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:41.585 12:26:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:41.585 12:26:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:41.585 12:26:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:41.585 12:26:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:41.585 12:26:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:41.585 12:26:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1542 -- # continue 00:05:41.585 12:26:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:41.585 12:26:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:05:41.585 12:26:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:41.585 12:26:50 -- 
common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:41.585 12:26:50 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:41.585 12:26:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:41.585 12:26:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:41.585 12:26:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:41.585 12:26:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:41.585 12:26:50 -- common/autotest_common.sh@1542 -- # continue 00:05:41.585 12:26:50 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:41.585 12:26:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.585 12:26:50 -- common/autotest_common.sh@10 -- # set +x 00:05:41.585 12:26:50 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:41.585 12:26:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:41.585 12:26:50 -- common/autotest_common.sh@10 -- # set +x 00:05:41.585 12:26:50 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.537 lsblk: /dev/nvme0c0n1: not a block device 00:05:42.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.813 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.813 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.813 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.072 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.072 12:26:51 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:43.072 12:26:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:43.072 12:26:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.072 12:26:51 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:43.072 12:26:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:43.072 12:26:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.072 12:26:51 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:43.072 12:26:51 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:43.072 12:26:51 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:43.072 12:26:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:43.072 12:26:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:43.072 12:26:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.072 12:26:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.072 12:26:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:43.072 12:26:52 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:43.072 12:26:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:43.072 12:26:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:43.072 12:26:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.072 12:26:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # cat 
/sys/bus/pci/devices/0000:00:07.0/device 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:43.072 12:26:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.072 12:26:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:43.072 12:26:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.072 12:26:52 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:43.072 12:26:52 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:43.072 12:26:52 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.072 12:26:52 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:43.072 12:26:52 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:43.072 12:26:52 -- common/autotest_common.sh@1578 -- # return 0 00:05:43.072 12:26:52 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:43.072 12:26:52 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:43.072 12:26:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:43.072 12:26:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:43.072 12:26:52 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:43.072 12:26:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:43.072 12:26:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.072 12:26:52 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.072 12:26:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.072 12:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.072 12:26:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.330 ************************************ 00:05:43.330 START TEST env 00:05:43.330 ************************************ 00:05:43.330 12:26:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.330 * Looking for test storage... 
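The OPAL check traced above needs no NVMe admin command up front: opal_revert_cleanup decides from the PCI device ID alone. A hedged sketch of the same sysfs probe, looping over the four BDFs from this run; 0x0a54 is the device ID the harness treats as revert-capable, and since the QEMU controllers here all report 0x0010 the loop prints nothing:

  for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
    dev=$(cat "/sys/bus/pci/devices/$bdf/device")     # PCI device ID, 0x0010 here
    [[ "$dev" == "0x0a54" ]] && echo "$bdf: OPAL revert candidate"
  done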
00:05:43.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.330 12:26:52 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.330 12:26:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.330 12:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.330 12:26:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.330 ************************************ 00:05:43.330 START TEST env_memory 00:05:43.330 ************************************ 00:05:43.330 12:26:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.330 00:05:43.330 00:05:43.330 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.331 http://cunit.sourceforge.net/ 00:05:43.331 00:05:43.331 00:05:43.331 Suite: memory 00:05:43.331 Test: alloc and free memory map ...[2024-05-15 12:26:52.234007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.331 passed 00:05:43.331 Test: mem map translation ...[2024-05-15 12:26:52.297681] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.331 [2024-05-15 12:26:52.297762] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.331 [2024-05-15 12:26:52.297856] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.331 [2024-05-15 12:26:52.297889] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.589 passed 00:05:43.589 Test: mem map registration ...[2024-05-15 12:26:52.400882] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:43.589 [2024-05-15 12:26:52.400981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:43.589 passed 00:05:43.589 Test: mem map adjacent registrations ...passed 00:05:43.589 00:05:43.589 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.589 suites 1 1 n/a 0 0 00:05:43.589 tests 4 4 4 0 0 00:05:43.589 asserts 152 152 152 0 n/a 00:05:43.589 00:05:43.589 Elapsed time = 0.351 seconds 00:05:43.589 00:05:43.589 real 0m0.383s 00:05:43.589 user 0m0.356s 00:05:43.589 sys 0m0.025s 00:05:43.589 ************************************ 00:05:43.589 END TEST env_memory 00:05:43.589 ************************************ 00:05:43.589 12:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.589 12:26:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.589 12:26:52 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.849 12:26:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.849 12:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.849 12:26:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.849 ************************************ 00:05:43.849 START TEST env_vtophys 00:05:43.849 ************************************ 00:05:43.849 12:26:52 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.849 EAL: lib.eal log level changed from notice to debug 00:05:43.849 EAL: Detected lcore 0 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 1 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 2 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 3 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 4 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 5 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 6 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 7 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 8 as core 0 on socket 0 00:05:43.849 EAL: Detected lcore 9 as core 0 on socket 0 00:05:43.849 EAL: Maximum logical cores by configuration: 128 00:05:43.849 EAL: Detected CPU lcores: 10 00:05:43.849 EAL: Detected NUMA nodes: 1 00:05:43.849 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:43.849 EAL: Detected shared linkage of DPDK 00:05:43.849 EAL: No shared files mode enabled, IPC will be disabled 00:05:43.849 EAL: Selected IOVA mode 'PA' 00:05:43.849 EAL: Probing VFIO support... 00:05:43.849 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.849 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:43.849 EAL: Ask a virtual area of 0x2e000 bytes 00:05:43.849 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:43.849 EAL: Setting up physically contiguous memory... 00:05:43.849 EAL: Setting maximum number of open files to 524288 00:05:43.849 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:43.849 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:43.849 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.849 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:43.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.849 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.849 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:43.849 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:43.849 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.849 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:43.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.849 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.849 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:43.849 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:43.849 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.849 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:43.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.849 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.849 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:43.849 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:43.849 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.849 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:43.849 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.849 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.849 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:43.849 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:43.849 EAL: Hugepages will be freed exactly as allocated. 
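The reservation sizes in the EAL trace above follow directly from the memseg-list geometry it prints: each of the 4 lists tracks n_segs:8192 segments of hugepage_sz:2097152 bytes, so the data area per list is 8192 * 2 MiB = 16 GiB, exactly the 0x400000000 bytes requested after each 0x61000 bookkeeping area. A one-line check:

  printf '0x%x\n' $(( 8192 * 2097152 ))    # -> 0x400000000, the per-list VA reservation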
00:05:43.849 EAL: No shared files mode enabled, IPC is disabled 00:05:43.849 EAL: No shared files mode enabled, IPC is disabled 00:05:43.849 EAL: TSC frequency is ~2200000 KHz 00:05:43.849 EAL: Main lcore 0 is ready (tid=7fb16ead9a40;cpuset=[0]) 00:05:43.849 EAL: Trying to obtain current memory policy. 00:05:43.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.849 EAL: Restoring previous memory policy: 0 00:05:43.849 EAL: request: mp_malloc_sync 00:05:43.849 EAL: No shared files mode enabled, IPC is disabled 00:05:43.849 EAL: Heap on socket 0 was expanded by 2MB 00:05:43.849 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.849 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:43.849 EAL: Mem event callback 'spdk:(nil)' registered 00:05:43.849 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:43.849 00:05:43.849 00:05:43.849 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.849 http://cunit.sourceforge.net/ 00:05:43.849 00:05:43.849 00:05:43.849 Suite: components_suite 00:05:44.416 Test: vtophys_malloc_test ...passed 00:05:44.416 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:44.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.416 EAL: Restoring previous memory policy: 4 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was expanded by 4MB 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was shrunk by 4MB 00:05:44.416 EAL: Trying to obtain current memory policy. 00:05:44.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.416 EAL: Restoring previous memory policy: 4 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was expanded by 6MB 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was shrunk by 6MB 00:05:44.416 EAL: Trying to obtain current memory policy. 00:05:44.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.416 EAL: Restoring previous memory policy: 4 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was expanded by 10MB 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was shrunk by 10MB 00:05:44.416 EAL: Trying to obtain current memory policy. 
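The repeated expand/shrink pairs above and below are vtophys_malloc_test stepping through allocation sizes, each spdk allocation firing the 'spdk:(nil)' mem event callback that grows or trims the socket-0 heap. To reproduce the suite outside the autotest harness, the binary can be run directly; the path is taken from the run_test line above, and root is assumed for hugepage access:

cd /home/vagrant/spdk_repo/spdk
sudo ./test/env/vtophys/vtophys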
00:05:44.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.416 EAL: Restoring previous memory policy: 4 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was expanded by 18MB 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was shrunk by 18MB 00:05:44.416 EAL: Trying to obtain current memory policy. 00:05:44.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.416 EAL: Restoring previous memory policy: 4 00:05:44.416 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.416 EAL: request: mp_malloc_sync 00:05:44.416 EAL: No shared files mode enabled, IPC is disabled 00:05:44.416 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.674 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.674 EAL: request: mp_malloc_sync 00:05:44.674 EAL: No shared files mode enabled, IPC is disabled 00:05:44.674 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.674 EAL: Trying to obtain current memory policy. 00:05:44.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.674 EAL: Restoring previous memory policy: 4 00:05:44.674 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.674 EAL: request: mp_malloc_sync 00:05:44.674 EAL: No shared files mode enabled, IPC is disabled 00:05:44.674 EAL: Heap on socket 0 was expanded by 66MB 00:05:44.674 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.932 EAL: request: mp_malloc_sync 00:05:44.932 EAL: No shared files mode enabled, IPC is disabled 00:05:44.932 EAL: Heap on socket 0 was shrunk by 66MB 00:05:44.932 EAL: Trying to obtain current memory policy. 00:05:44.932 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.932 EAL: Restoring previous memory policy: 4 00:05:44.932 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.932 EAL: request: mp_malloc_sync 00:05:44.932 EAL: No shared files mode enabled, IPC is disabled 00:05:44.932 EAL: Heap on socket 0 was expanded by 130MB 00:05:45.198 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.198 EAL: request: mp_malloc_sync 00:05:45.198 EAL: No shared files mode enabled, IPC is disabled 00:05:45.198 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.456 EAL: Trying to obtain current memory policy. 00:05:45.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.456 EAL: Restoring previous memory policy: 4 00:05:45.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.456 EAL: request: mp_malloc_sync 00:05:45.456 EAL: No shared files mode enabled, IPC is disabled 00:05:45.456 EAL: Heap on socket 0 was expanded by 258MB 00:05:45.755 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.035 EAL: request: mp_malloc_sync 00:05:46.035 EAL: No shared files mode enabled, IPC is disabled 00:05:46.035 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.293 EAL: Trying to obtain current memory policy. 
00:05:46.293 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.293 EAL: Restoring previous memory policy: 4 00:05:46.293 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.293 EAL: request: mp_malloc_sync 00:05:46.293 EAL: No shared files mode enabled, IPC is disabled 00:05:46.293 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.229 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.488 EAL: request: mp_malloc_sync 00:05:47.488 EAL: No shared files mode enabled, IPC is disabled 00:05:47.488 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.054 EAL: Trying to obtain current memory policy. 00:05:48.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.311 EAL: Restoring previous memory policy: 4 00:05:48.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.569 EAL: request: mp_malloc_sync 00:05:48.569 EAL: No shared files mode enabled, IPC is disabled 00:05:48.569 EAL: Heap on socket 0 was expanded by 1026MB 00:05:50.468 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.468 EAL: request: mp_malloc_sync 00:05:50.468 EAL: No shared files mode enabled, IPC is disabled 00:05:50.468 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.843 passed 00:05:51.843 00:05:51.843 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.843 suites 1 1 n/a 0 0 00:05:51.843 tests 2 2 2 0 0 00:05:51.843 asserts 5453 5453 5453 0 n/a 00:05:51.843 00:05:51.843 Elapsed time = 7.828 seconds 00:05:51.843 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.843 EAL: request: mp_malloc_sync 00:05:51.843 EAL: No shared files mode enabled, IPC is disabled 00:05:51.843 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.843 EAL: No shared files mode enabled, IPC is disabled 00:05:51.844 EAL: No shared files mode enabled, IPC is disabled 00:05:51.844 EAL: No shared files mode enabled, IPC is disabled 00:05:51.844 00:05:51.844 real 0m8.147s 00:05:51.844 user 0m6.974s 00:05:51.844 sys 0m1.010s 00:05:51.844 12:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.844 12:27:00 -- common/autotest_common.sh@10 -- # set +x 00:05:51.844 ************************************ 00:05:51.844 END TEST env_vtophys 00:05:51.844 ************************************ 00:05:51.844 12:27:00 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:51.844 12:27:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.844 12:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.844 12:27:00 -- common/autotest_common.sh@10 -- # set +x 00:05:51.844 ************************************ 00:05:51.844 START TEST env_pci 00:05:51.844 ************************************ 00:05:51.844 12:27:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:51.844 00:05:51.844 00:05:51.844 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.844 http://cunit.sourceforge.net/ 00:05:51.844 00:05:51.844 00:05:51.844 Suite: pci 00:05:51.844 Test: pci_hook ...[2024-05-15 12:27:00.841747] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56948 has claimed it 00:05:52.109 passed 00:05:52.109 00:05:52.109 EAL: Cannot find device (10000:00:01.0) 00:05:52.109 EAL: Failed to attach device on primary process 00:05:52.109 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.109 suites 1 1 n/a 0 0 00:05:52.109 tests 1 1 1 0 0 00:05:52.109 asserts 25 25 25 0 n/a 00:05:52.109 00:05:52.109 Elapsed 
time = 0.008 seconds 00:05:52.109 00:05:52.109 real 0m0.082s 00:05:52.109 user 0m0.034s 00:05:52.109 sys 0m0.047s 00:05:52.109 12:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.109 12:27:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.109 ************************************ 00:05:52.109 END TEST env_pci 00:05:52.109 ************************************ 00:05:52.109 12:27:00 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.109 12:27:00 -- env/env.sh@15 -- # uname 00:05:52.109 12:27:00 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.109 12:27:00 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.109 12:27:00 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.109 12:27:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:52.109 12:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.109 12:27:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.109 ************************************ 00:05:52.109 START TEST env_dpdk_post_init 00:05:52.109 ************************************ 00:05:52.109 12:27:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.109 EAL: Detected CPU lcores: 10 00:05:52.109 EAL: Detected NUMA nodes: 1 00:05:52.109 EAL: Detected shared linkage of DPDK 00:05:52.109 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.109 EAL: Selected IOVA mode 'PA' 00:05:52.380 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.380 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:52.380 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:52.380 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:52.380 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:52.380 Starting DPDK initialization... 00:05:52.380 Starting SPDK post initialization... 00:05:52.380 SPDK NVMe probe 00:05:52.380 Attaching to 0000:00:06.0 00:05:52.380 Attaching to 0000:00:07.0 00:05:52.380 Attaching to 0000:00:08.0 00:05:52.380 Attaching to 0000:00:09.0 00:05:52.380 Attached to 0000:00:06.0 00:05:52.380 Attached to 0000:00:07.0 00:05:52.380 Attached to 0000:00:09.0 00:05:52.380 Attached to 0000:00:08.0 00:05:52.380 Cleaning up... 
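env_dpdk_post_init attached the four emulated controllers (1b36:0010) at 0000:00:06.0 through 0000:00:09.0 via the spdk_nvme driver. Host-side driver binding can be inspected with SPDK's setup script; a sketch, with the repo path assumed from this run:

cd /home/vagrant/spdk_repo/spdk
sudo ./scripts/setup.sh status     # show NVMe devices and which driver each is bound to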
00:05:52.380 00:05:52.380 real 0m0.350s 00:05:52.380 user 0m0.125s 00:05:52.380 sys 0m0.125s 00:05:52.380 12:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.380 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.380 ************************************ 00:05:52.380 END TEST env_dpdk_post_init 00:05:52.380 ************************************ 00:05:52.380 12:27:01 -- env/env.sh@26 -- # uname 00:05:52.380 12:27:01 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.380 12:27:01 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.380 12:27:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.380 12:27:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.380 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.380 ************************************ 00:05:52.380 START TEST env_mem_callbacks 00:05:52.380 ************************************ 00:05:52.380 12:27:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.638 EAL: Detected CPU lcores: 10 00:05:52.638 EAL: Detected NUMA nodes: 1 00:05:52.638 EAL: Detected shared linkage of DPDK 00:05:52.638 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.638 EAL: Selected IOVA mode 'PA' 00:05:52.638 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.638 00:05:52.638 00:05:52.638 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.638 http://cunit.sourceforge.net/ 00:05:52.638 00:05:52.638 00:05:52.638 Suite: memory 00:05:52.638 Test: test ... 00:05:52.638 register 0x200000200000 2097152 00:05:52.638 malloc 3145728 00:05:52.638 register 0x200000400000 4194304 00:05:52.638 buf 0x2000004fffc0 len 3145728 PASSED 00:05:52.638 malloc 64 00:05:52.638 buf 0x2000004ffec0 len 64 PASSED 00:05:52.638 malloc 4194304 00:05:52.638 register 0x200000800000 6291456 00:05:52.638 buf 0x2000009fffc0 len 4194304 PASSED 00:05:52.638 free 0x2000004fffc0 3145728 00:05:52.638 free 0x2000004ffec0 64 00:05:52.638 unregister 0x200000400000 4194304 PASSED 00:05:52.638 free 0x2000009fffc0 4194304 00:05:52.638 unregister 0x200000800000 6291456 PASSED 00:05:52.638 malloc 8388608 00:05:52.638 register 0x200000400000 10485760 00:05:52.638 buf 0x2000005fffc0 len 8388608 PASSED 00:05:52.638 free 0x2000005fffc0 8388608 00:05:52.638 unregister 0x200000400000 10485760 PASSED 00:05:52.638 passed 00:05:52.638 00:05:52.638 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.638 suites 1 1 n/a 0 0 00:05:52.638 tests 1 1 1 0 0 00:05:52.638 asserts 15 15 15 0 n/a 00:05:52.638 00:05:52.638 Elapsed time = 0.062 seconds 00:05:52.638 00:05:52.638 real 0m0.267s 00:05:52.638 user 0m0.097s 00:05:52.638 sys 0m0.068s 00:05:52.638 12:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.638 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.638 ************************************ 00:05:52.638 END TEST env_mem_callbacks 00:05:52.638 ************************************ 00:05:52.896 00:05:52.896 real 0m9.566s 00:05:52.896 user 0m7.695s 00:05:52.896 sys 0m1.481s 00:05:52.896 12:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.896 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.896 ************************************ 00:05:52.896 END TEST env 00:05:52.896 ************************************ 00:05:52.896 12:27:01 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
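rpc.sh starts the target with the bdev tracepoint group enabled (-e bdev per the trace below) and then waits on /var/tmp/spdk.sock. Launched by hand, the equivalent is roughly (binary path taken from the log; sudo assumed for hugepage access):

cd /home/vagrant/spdk_repo/spdk
sudo ./build/bin/spdk_tgt -e bdev &
# then wait for /var/tmp/spdk.sock to appear before issuing RPCs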
00:05:52.896 12:27:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.896 12:27:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.896 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.896 ************************************ 00:05:52.896 START TEST rpc 00:05:52.896 ************************************ 00:05:52.896 12:27:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.896 * Looking for test storage... 00:05:52.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.896 12:27:01 -- rpc/rpc.sh@65 -- # spdk_pid=57061 00:05:52.896 12:27:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.896 12:27:01 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:52.896 12:27:01 -- rpc/rpc.sh@67 -- # waitforlisten 57061 00:05:52.896 12:27:01 -- common/autotest_common.sh@819 -- # '[' -z 57061 ']' 00:05:52.896 12:27:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.896 12:27:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.896 12:27:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.896 12:27:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.896 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.896 [2024-05-15 12:27:01.899598] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:52.896 [2024-05-15 12:27:01.899775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57061 ] 00:05:53.154 [2024-05-15 12:27:02.078043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.413 [2024-05-15 12:27:02.384372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.413 [2024-05-15 12:27:02.384675] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.413 [2024-05-15 12:27:02.384703] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57061' to capture a snapshot of events at runtime. 00:05:53.413 [2024-05-15 12:27:02.384718] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57061 for offline analysis/debug. 
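With the reactor up and /var/tmp/spdk.sock listening, RPCs can be issued by hand with the bundled client, and the tracepoint snapshot can be captured exactly as the NOTICE above instructs. A sketch: pid 57061 is specific to this run, and spdk_trace living in build/bin is an assumption about the tree layout.

cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py rpc_get_methods | head            # sanity-check the RPC socket
sudo ./build/bin/spdk_trace -s spdk_tgt -p 57061   # snapshot the bdev tracepoints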
00:05:53.413 [2024-05-15 12:27:02.384763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.788 12:27:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.788 12:27:03 -- common/autotest_common.sh@852 -- # return 0 00:05:54.788 12:27:03 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.788 12:27:03 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.788 12:27:03 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.788 12:27:03 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.788 12:27:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.788 12:27:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.788 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.788 ************************************ 00:05:54.788 START TEST rpc_integrity 00:05:54.788 ************************************ 00:05:54.788 12:27:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:54.788 12:27:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.788 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.788 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.788 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.788 12:27:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.788 12:27:03 -- rpc/rpc.sh@13 -- # jq length 00:05:54.788 12:27:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.788 12:27:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.788 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.788 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.788 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.788 12:27:03 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:54.788 12:27:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.788 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.788 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.788 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.788 12:27:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.788 { 00:05:54.788 "name": "Malloc0", 00:05:54.788 "aliases": [ 00:05:54.788 "5ed51e7e-0150-44fc-9f2f-76b6d0a2d32c" 00:05:54.788 ], 00:05:54.788 "product_name": "Malloc disk", 00:05:54.788 "block_size": 512, 00:05:54.788 "num_blocks": 16384, 00:05:54.788 "uuid": "5ed51e7e-0150-44fc-9f2f-76b6d0a2d32c", 00:05:54.788 "assigned_rate_limits": { 00:05:54.788 "rw_ios_per_sec": 0, 00:05:54.788 "rw_mbytes_per_sec": 0, 00:05:54.788 "r_mbytes_per_sec": 0, 00:05:54.788 "w_mbytes_per_sec": 0 00:05:54.788 }, 00:05:54.788 "claimed": false, 00:05:54.788 "zoned": false, 00:05:54.788 "supported_io_types": { 00:05:54.788 "read": true, 00:05:54.788 "write": true, 00:05:54.788 "unmap": true, 00:05:54.788 "write_zeroes": true, 00:05:54.788 "flush": true, 00:05:54.788 "reset": true, 00:05:54.788 "compare": false, 00:05:54.788 "compare_and_write": false, 00:05:54.788 "abort": true, 00:05:54.789 "nvme_admin": false, 00:05:54.789 "nvme_io": false 00:05:54.789 }, 00:05:54.789 "memory_domains": [ 00:05:54.789 { 00:05:54.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.789 
"dma_device_type": 2 00:05:54.789 } 00:05:54.789 ], 00:05:54.789 "driver_specific": {} 00:05:54.789 } 00:05:54.789 ]' 00:05:54.789 12:27:03 -- rpc/rpc.sh@17 -- # jq length 00:05:54.789 12:27:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.789 12:27:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:54.789 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.789 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.789 [2024-05-15 12:27:03.718830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:54.789 [2024-05-15 12:27:03.718980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.789 [2024-05-15 12:27:03.719014] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:54.789 [2024-05-15 12:27:03.719033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.789 [2024-05-15 12:27:03.722004] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.789 [2024-05-15 12:27:03.722058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.789 Passthru0 00:05:54.789 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.789 12:27:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.789 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.789 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.789 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.789 12:27:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.789 { 00:05:54.789 "name": "Malloc0", 00:05:54.789 "aliases": [ 00:05:54.789 "5ed51e7e-0150-44fc-9f2f-76b6d0a2d32c" 00:05:54.789 ], 00:05:54.789 "product_name": "Malloc disk", 00:05:54.789 "block_size": 512, 00:05:54.789 "num_blocks": 16384, 00:05:54.789 "uuid": "5ed51e7e-0150-44fc-9f2f-76b6d0a2d32c", 00:05:54.789 "assigned_rate_limits": { 00:05:54.789 "rw_ios_per_sec": 0, 00:05:54.789 "rw_mbytes_per_sec": 0, 00:05:54.789 "r_mbytes_per_sec": 0, 00:05:54.789 "w_mbytes_per_sec": 0 00:05:54.789 }, 00:05:54.789 "claimed": true, 00:05:54.789 "claim_type": "exclusive_write", 00:05:54.789 "zoned": false, 00:05:54.789 "supported_io_types": { 00:05:54.789 "read": true, 00:05:54.789 "write": true, 00:05:54.789 "unmap": true, 00:05:54.789 "write_zeroes": true, 00:05:54.789 "flush": true, 00:05:54.789 "reset": true, 00:05:54.789 "compare": false, 00:05:54.789 "compare_and_write": false, 00:05:54.789 "abort": true, 00:05:54.789 "nvme_admin": false, 00:05:54.789 "nvme_io": false 00:05:54.789 }, 00:05:54.789 "memory_domains": [ 00:05:54.789 { 00:05:54.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.789 "dma_device_type": 2 00:05:54.789 } 00:05:54.789 ], 00:05:54.789 "driver_specific": {} 00:05:54.789 }, 00:05:54.789 { 00:05:54.789 "name": "Passthru0", 00:05:54.789 "aliases": [ 00:05:54.789 "7b1e66ca-f771-5059-b168-46727aa2e67c" 00:05:54.789 ], 00:05:54.789 "product_name": "passthru", 00:05:54.789 "block_size": 512, 00:05:54.789 "num_blocks": 16384, 00:05:54.789 "uuid": "7b1e66ca-f771-5059-b168-46727aa2e67c", 00:05:54.789 "assigned_rate_limits": { 00:05:54.789 "rw_ios_per_sec": 0, 00:05:54.789 "rw_mbytes_per_sec": 0, 00:05:54.789 "r_mbytes_per_sec": 0, 00:05:54.789 "w_mbytes_per_sec": 0 00:05:54.789 }, 00:05:54.789 "claimed": false, 00:05:54.789 "zoned": false, 00:05:54.789 "supported_io_types": { 00:05:54.789 "read": true, 00:05:54.789 "write": true, 00:05:54.789 "unmap": true, 00:05:54.789 
"write_zeroes": true, 00:05:54.789 "flush": true, 00:05:54.789 "reset": true, 00:05:54.789 "compare": false, 00:05:54.789 "compare_and_write": false, 00:05:54.789 "abort": true, 00:05:54.789 "nvme_admin": false, 00:05:54.789 "nvme_io": false 00:05:54.789 }, 00:05:54.789 "memory_domains": [ 00:05:54.789 { 00:05:54.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.789 "dma_device_type": 2 00:05:54.789 } 00:05:54.789 ], 00:05:54.789 "driver_specific": { 00:05:54.789 "passthru": { 00:05:54.789 "name": "Passthru0", 00:05:54.789 "base_bdev_name": "Malloc0" 00:05:54.789 } 00:05:54.789 } 00:05:54.789 } 00:05:54.789 ]' 00:05:54.789 12:27:03 -- rpc/rpc.sh@21 -- # jq length 00:05:55.047 12:27:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.047 12:27:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.047 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.047 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.047 12:27:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:55.047 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.047 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.047 12:27:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.047 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.047 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.047 12:27:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.047 12:27:03 -- rpc/rpc.sh@26 -- # jq length 00:05:55.047 12:27:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.047 00:05:55.047 real 0m0.349s 00:05:55.047 user 0m0.218s 00:05:55.047 sys 0m0.036s 00:05:55.047 12:27:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.047 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 ************************************ 00:05:55.047 END TEST rpc_integrity 00:05:55.047 ************************************ 00:05:55.047 12:27:03 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:55.047 12:27:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.047 12:27:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.047 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 ************************************ 00:05:55.047 START TEST rpc_plugins 00:05:55.047 ************************************ 00:05:55.047 12:27:03 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:55.047 12:27:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:55.047 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.047 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.047 12:27:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:55.047 12:27:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:55.047 12:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.047 12:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 12:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.047 12:27:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:55.047 { 00:05:55.047 "name": "Malloc1", 00:05:55.047 "aliases": [ 00:05:55.047 "2021f99f-f792-4b22-addf-42c21893c3ae" 00:05:55.047 ], 00:05:55.047 "product_name": "Malloc disk", 00:05:55.047 
"block_size": 4096, 00:05:55.047 "num_blocks": 256, 00:05:55.047 "uuid": "2021f99f-f792-4b22-addf-42c21893c3ae", 00:05:55.047 "assigned_rate_limits": { 00:05:55.047 "rw_ios_per_sec": 0, 00:05:55.047 "rw_mbytes_per_sec": 0, 00:05:55.047 "r_mbytes_per_sec": 0, 00:05:55.047 "w_mbytes_per_sec": 0 00:05:55.047 }, 00:05:55.047 "claimed": false, 00:05:55.047 "zoned": false, 00:05:55.047 "supported_io_types": { 00:05:55.047 "read": true, 00:05:55.047 "write": true, 00:05:55.047 "unmap": true, 00:05:55.047 "write_zeroes": true, 00:05:55.047 "flush": true, 00:05:55.047 "reset": true, 00:05:55.047 "compare": false, 00:05:55.047 "compare_and_write": false, 00:05:55.047 "abort": true, 00:05:55.047 "nvme_admin": false, 00:05:55.047 "nvme_io": false 00:05:55.047 }, 00:05:55.047 "memory_domains": [ 00:05:55.047 { 00:05:55.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.047 "dma_device_type": 2 00:05:55.047 } 00:05:55.047 ], 00:05:55.047 "driver_specific": {} 00:05:55.047 } 00:05:55.047 ]' 00:05:55.047 12:27:03 -- rpc/rpc.sh@32 -- # jq length 00:05:55.047 12:27:04 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:55.047 12:27:04 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:55.047 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.047 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.047 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.047 12:27:04 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:55.047 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.047 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.306 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.306 12:27:04 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:55.306 12:27:04 -- rpc/rpc.sh@36 -- # jq length 00:05:55.306 12:27:04 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:55.306 00:05:55.306 real 0m0.162s 00:05:55.306 user 0m0.105s 00:05:55.306 sys 0m0.018s 00:05:55.306 12:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.306 ************************************ 00:05:55.306 END TEST rpc_plugins 00:05:55.306 ************************************ 00:05:55.306 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.306 12:27:04 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:55.306 12:27:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.306 12:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.306 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.306 ************************************ 00:05:55.306 START TEST rpc_trace_cmd_test 00:05:55.306 ************************************ 00:05:55.306 12:27:04 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:55.306 12:27:04 -- rpc/rpc.sh@40 -- # local info 00:05:55.306 12:27:04 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:55.306 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.306 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.306 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.306 12:27:04 -- rpc/rpc.sh@42 -- # info='{ 00:05:55.306 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57061", 00:05:55.306 "tpoint_group_mask": "0x8", 00:05:55.306 "iscsi_conn": { 00:05:55.306 "mask": "0x2", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "scsi": { 00:05:55.306 "mask": "0x4", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "bdev": { 00:05:55.306 "mask": "0x8", 00:05:55.306 "tpoint_mask": 
"0xffffffffffffffff" 00:05:55.306 }, 00:05:55.306 "nvmf_rdma": { 00:05:55.306 "mask": "0x10", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "nvmf_tcp": { 00:05:55.306 "mask": "0x20", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "ftl": { 00:05:55.306 "mask": "0x40", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "blobfs": { 00:05:55.306 "mask": "0x80", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "dsa": { 00:05:55.306 "mask": "0x200", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "thread": { 00:05:55.306 "mask": "0x400", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "nvme_pcie": { 00:05:55.306 "mask": "0x800", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "iaa": { 00:05:55.306 "mask": "0x1000", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "nvme_tcp": { 00:05:55.306 "mask": "0x2000", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 }, 00:05:55.306 "bdev_nvme": { 00:05:55.306 "mask": "0x4000", 00:05:55.306 "tpoint_mask": "0x0" 00:05:55.306 } 00:05:55.306 }' 00:05:55.306 12:27:04 -- rpc/rpc.sh@43 -- # jq length 00:05:55.306 12:27:04 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:55.306 12:27:04 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.306 12:27:04 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.306 12:27:04 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:55.564 12:27:04 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:55.564 12:27:04 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.564 12:27:04 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.564 12:27:04 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:55.564 12:27:04 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:55.564 00:05:55.564 real 0m0.284s 00:05:55.564 user 0m0.247s 00:05:55.564 sys 0m0.026s 00:05:55.564 12:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.564 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.564 ************************************ 00:05:55.564 END TEST rpc_trace_cmd_test 00:05:55.564 ************************************ 00:05:55.564 12:27:04 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:55.564 12:27:04 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:55.564 12:27:04 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:55.564 12:27:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.564 12:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.564 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.564 ************************************ 00:05:55.564 START TEST rpc_daemon_integrity 00:05:55.564 ************************************ 00:05:55.565 12:27:04 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:55.565 12:27:04 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.565 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.565 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.565 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.565 12:27:04 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.565 12:27:04 -- rpc/rpc.sh@13 -- # jq length 00:05:55.565 12:27:04 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.565 12:27:04 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.565 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.565 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.823 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.823 12:27:04 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:55.823 12:27:04 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.823 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.823 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.823 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.823 12:27:04 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.823 { 00:05:55.823 "name": "Malloc2", 00:05:55.823 "aliases": [ 00:05:55.823 "4a82ccbe-9e1f-4338-af17-466f65e9df35" 00:05:55.823 ], 00:05:55.823 "product_name": "Malloc disk", 00:05:55.823 "block_size": 512, 00:05:55.823 "num_blocks": 16384, 00:05:55.823 "uuid": "4a82ccbe-9e1f-4338-af17-466f65e9df35", 00:05:55.823 "assigned_rate_limits": { 00:05:55.823 "rw_ios_per_sec": 0, 00:05:55.823 "rw_mbytes_per_sec": 0, 00:05:55.823 "r_mbytes_per_sec": 0, 00:05:55.823 "w_mbytes_per_sec": 0 00:05:55.823 }, 00:05:55.823 "claimed": false, 00:05:55.823 "zoned": false, 00:05:55.823 "supported_io_types": { 00:05:55.823 "read": true, 00:05:55.823 "write": true, 00:05:55.823 "unmap": true, 00:05:55.823 "write_zeroes": true, 00:05:55.823 "flush": true, 00:05:55.823 "reset": true, 00:05:55.823 "compare": false, 00:05:55.823 "compare_and_write": false, 00:05:55.823 "abort": true, 00:05:55.823 "nvme_admin": false, 00:05:55.823 "nvme_io": false 00:05:55.823 }, 00:05:55.823 "memory_domains": [ 00:05:55.823 { 00:05:55.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.823 "dma_device_type": 2 00:05:55.823 } 00:05:55.823 ], 00:05:55.823 "driver_specific": {} 00:05:55.823 } 00:05:55.823 ]' 00:05:55.823 12:27:04 -- rpc/rpc.sh@17 -- # jq length 00:05:55.823 12:27:04 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.823 12:27:04 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:55.823 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.823 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.823 [2024-05-15 12:27:04.662879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:55.823 [2024-05-15 12:27:04.662994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.823 [2024-05-15 12:27:04.663027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:55.823 [2024-05-15 12:27:04.663045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.823 [2024-05-15 12:27:04.666164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.823 [2024-05-15 12:27:04.666236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.823 Passthru0 00:05:55.823 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.823 12:27:04 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.823 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.823 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.823 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.823 12:27:04 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.823 { 00:05:55.823 "name": "Malloc2", 00:05:55.823 "aliases": [ 00:05:55.823 "4a82ccbe-9e1f-4338-af17-466f65e9df35" 00:05:55.823 ], 00:05:55.823 "product_name": "Malloc disk", 00:05:55.823 "block_size": 512, 00:05:55.823 "num_blocks": 16384, 00:05:55.823 "uuid": "4a82ccbe-9e1f-4338-af17-466f65e9df35", 00:05:55.823 "assigned_rate_limits": { 00:05:55.823 "rw_ios_per_sec": 0, 00:05:55.823 "rw_mbytes_per_sec": 0, 00:05:55.823 "r_mbytes_per_sec": 0, 00:05:55.823 
"w_mbytes_per_sec": 0 00:05:55.823 }, 00:05:55.823 "claimed": true, 00:05:55.823 "claim_type": "exclusive_write", 00:05:55.823 "zoned": false, 00:05:55.823 "supported_io_types": { 00:05:55.823 "read": true, 00:05:55.823 "write": true, 00:05:55.823 "unmap": true, 00:05:55.823 "write_zeroes": true, 00:05:55.823 "flush": true, 00:05:55.823 "reset": true, 00:05:55.823 "compare": false, 00:05:55.823 "compare_and_write": false, 00:05:55.823 "abort": true, 00:05:55.823 "nvme_admin": false, 00:05:55.823 "nvme_io": false 00:05:55.823 }, 00:05:55.823 "memory_domains": [ 00:05:55.823 { 00:05:55.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.823 "dma_device_type": 2 00:05:55.823 } 00:05:55.823 ], 00:05:55.823 "driver_specific": {} 00:05:55.823 }, 00:05:55.823 { 00:05:55.823 "name": "Passthru0", 00:05:55.823 "aliases": [ 00:05:55.823 "293e6ad7-2e00-580f-8f65-40e22a7a18e3" 00:05:55.823 ], 00:05:55.823 "product_name": "passthru", 00:05:55.823 "block_size": 512, 00:05:55.823 "num_blocks": 16384, 00:05:55.823 "uuid": "293e6ad7-2e00-580f-8f65-40e22a7a18e3", 00:05:55.823 "assigned_rate_limits": { 00:05:55.823 "rw_ios_per_sec": 0, 00:05:55.823 "rw_mbytes_per_sec": 0, 00:05:55.823 "r_mbytes_per_sec": 0, 00:05:55.823 "w_mbytes_per_sec": 0 00:05:55.823 }, 00:05:55.823 "claimed": false, 00:05:55.823 "zoned": false, 00:05:55.823 "supported_io_types": { 00:05:55.823 "read": true, 00:05:55.823 "write": true, 00:05:55.823 "unmap": true, 00:05:55.823 "write_zeroes": true, 00:05:55.823 "flush": true, 00:05:55.823 "reset": true, 00:05:55.823 "compare": false, 00:05:55.823 "compare_and_write": false, 00:05:55.823 "abort": true, 00:05:55.823 "nvme_admin": false, 00:05:55.823 "nvme_io": false 00:05:55.823 }, 00:05:55.823 "memory_domains": [ 00:05:55.823 { 00:05:55.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.823 "dma_device_type": 2 00:05:55.823 } 00:05:55.823 ], 00:05:55.823 "driver_specific": { 00:05:55.823 "passthru": { 00:05:55.823 "name": "Passthru0", 00:05:55.823 "base_bdev_name": "Malloc2" 00:05:55.823 } 00:05:55.823 } 00:05:55.823 } 00:05:55.823 ]' 00:05:55.823 12:27:04 -- rpc/rpc.sh@21 -- # jq length 00:05:55.823 12:27:04 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.823 12:27:04 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.823 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.823 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.823 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.823 12:27:04 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:55.823 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.823 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.823 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.823 12:27:04 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.823 12:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.823 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.823 12:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.823 12:27:04 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.823 12:27:04 -- rpc/rpc.sh@26 -- # jq length 00:05:56.081 12:27:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.081 00:05:56.081 real 0m0.338s 00:05:56.081 user 0m0.199s 00:05:56.081 sys 0m0.038s 00:05:56.081 12:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.081 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:56.081 ************************************ 00:05:56.081 END TEST 
rpc_daemon_integrity 00:05:56.081 ************************************ 00:05:56.081 12:27:04 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:56.081 12:27:04 -- rpc/rpc.sh@84 -- # killprocess 57061 00:05:56.081 12:27:04 -- common/autotest_common.sh@926 -- # '[' -z 57061 ']' 00:05:56.081 12:27:04 -- common/autotest_common.sh@930 -- # kill -0 57061 00:05:56.081 12:27:04 -- common/autotest_common.sh@931 -- # uname 00:05:56.081 12:27:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.081 12:27:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57061 00:05:56.081 12:27:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.081 12:27:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.081 killing process with pid 57061 00:05:56.081 12:27:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57061' 00:05:56.081 12:27:04 -- common/autotest_common.sh@945 -- # kill 57061 00:05:56.081 12:27:04 -- common/autotest_common.sh@950 -- # wait 57061 00:05:58.610 00:05:58.610 real 0m5.535s 00:05:58.610 user 0m6.329s 00:05:58.610 sys 0m0.906s 00:05:58.610 12:27:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.610 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.610 ************************************ 00:05:58.610 END TEST rpc 00:05:58.610 ************************************ 00:05:58.610 12:27:07 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:58.610 12:27:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.610 12:27:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.610 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.610 ************************************ 00:05:58.610 START TEST rpc_client 00:05:58.610 ************************************ 00:05:58.610 12:27:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:58.610 * Looking for test storage... 
00:05:58.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:58.610 12:27:07 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:58.610 OK 00:05:58.610 12:27:07 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:58.610 00:05:58.610 real 0m0.148s 00:05:58.610 user 0m0.067s 00:05:58.610 sys 0m0.087s 00:05:58.610 12:27:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.610 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.610 ************************************ 00:05:58.610 END TEST rpc_client 00:05:58.610 ************************************ 00:05:58.610 12:27:07 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:58.610 12:27:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.610 12:27:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.610 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.610 ************************************ 00:05:58.610 START TEST json_config 00:05:58.610 ************************************ 00:05:58.610 12:27:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:58.610 12:27:07 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:58.610 12:27:07 -- nvmf/common.sh@7 -- # uname -s 00:05:58.610 12:27:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.610 12:27:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.610 12:27:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.610 12:27:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.610 12:27:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.610 12:27:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.610 12:27:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.610 12:27:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.610 12:27:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.610 12:27:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.610 12:27:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fd0a5f7a-4d1c-4902-ae17-94c770fe00e0 00:05:58.610 12:27:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=fd0a5f7a-4d1c-4902-ae17-94c770fe00e0 00:05:58.610 12:27:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.610 12:27:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.610 12:27:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.610 12:27:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.610 12:27:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.610 12:27:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.610 12:27:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.610 12:27:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.610 12:27:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.610 12:27:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.610 12:27:07 -- paths/export.sh@5 -- # export PATH 00:05:58.610 12:27:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.610 12:27:07 -- nvmf/common.sh@46 -- # : 0 00:05:58.610 12:27:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:58.610 12:27:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:58.610 12:27:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:58.610 12:27:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.610 12:27:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.610 12:27:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:58.610 12:27:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:58.610 12:27:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:58.610 12:27:07 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:58.610 12:27:07 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:58.610 12:27:07 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:58.610 12:27:07 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.610 WARNING: No tests are enabled so not running JSON configuration tests 00:05:58.610 12:27:07 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:58.610 12:27:07 -- json_config/json_config.sh@27 -- # exit 0 00:05:58.610 00:05:58.610 real 0m0.067s 00:05:58.610 user 0m0.028s 00:05:58.610 sys 0m0.039s 00:05:58.610 12:27:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.610 ************************************ 00:05:58.610 END TEST json_config 00:05:58.610 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.610 ************************************ 00:05:58.610 12:27:07 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.610 12:27:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.610 12:27:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.610 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.610 ************************************ 00:05:58.611 START TEST json_config_extra_key 00:05:58.611 
************************************ 00:05:58.611 12:27:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.869 12:27:07 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:58.869 12:27:07 -- nvmf/common.sh@7 -- # uname -s 00:05:58.869 12:27:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.869 12:27:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.869 12:27:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.869 12:27:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.869 12:27:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.869 12:27:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.869 12:27:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.869 12:27:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.869 12:27:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.869 12:27:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.869 12:27:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fd0a5f7a-4d1c-4902-ae17-94c770fe00e0 00:05:58.869 12:27:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=fd0a5f7a-4d1c-4902-ae17-94c770fe00e0 00:05:58.869 12:27:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.869 12:27:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.869 12:27:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.869 12:27:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.869 12:27:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.869 12:27:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.869 12:27:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.869 12:27:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.869 12:27:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.870 12:27:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.870 12:27:07 -- paths/export.sh@5 -- # export PATH 00:05:58.870 12:27:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.870 12:27:07 -- nvmf/common.sh@46 -- # : 0 00:05:58.870 12:27:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:58.870 12:27:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:58.870 12:27:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:58.870 12:27:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.870 12:27:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.870 12:27:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:58.870 12:27:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:58.870 12:27:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.870 INFO: launching applications... 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=57366 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:58.870 Waiting for target to run... 
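waitforlisten above polls until the target binds the socket passed via -r, here /var/tmp/spdk_tgt.sock. A by-hand sketch of that readiness check, using the same client the tests use (run from the spdk repo root):

test -S /var/tmp/spdk_tgt.sock \
  && ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null \
  && echo 'target is up'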
00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 57366 /var/tmp/spdk_tgt.sock 00:05:58.870 12:27:07 -- common/autotest_common.sh@819 -- # '[' -z 57366 ']' 00:05:58.870 12:27:07 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.870 12:27:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.870 12:27:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.870 12:27:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.870 12:27:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.870 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.870 [2024-05-15 12:27:07.790896] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:58.870 [2024-05-15 12:27:07.791063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ] 00:05:59.436 [2024-05-15 12:27:08.274044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.694 [2024-05-15 12:27:08.497783] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.694 [2024-05-15 12:27:08.498042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.629 12:27:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.629 12:27:09 -- common/autotest_common.sh@852 -- # return 0 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:00.629 00:06:00.629 INFO: shutting down applications... 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
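The shutdown sequence traced below follows a fixed pattern: send SIGINT, then poll the pid with kill -0 every 0.5 s for up to 30 tries. A condensed sketch of that loop, matching the trace:

# Sketch of json_config_test_shutdown_app's polling loop as traced below:
# SIGINT first, then existence-check the pid until it exits or we give up.
shutdown_sketch() {
  local pid=$1
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
    sleep 0.5
  done
  return 1  # still alive after ~15 s
}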
00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 57366 ]] 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 57366 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57366 00:06:00.629 12:27:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:01.197 12:27:09 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:01.197 12:27:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:01.197 12:27:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57366 00:06:01.197 12:27:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:01.762 12:27:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:01.762 12:27:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:01.762 12:27:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57366 00:06:01.762 12:27:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:02.020 12:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:02.020 12:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:02.020 12:27:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57366 00:06:02.020 12:27:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:02.582 12:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:02.582 12:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:02.582 12:27:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57366 00:06:02.582 12:27:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:03.148 12:27:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:03.148 12:27:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:03.148 12:27:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57366 00:06:03.148 12:27:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57366 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:03.715 SPDK target shutdown done 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:03.715 Success 00:06:03.715 12:27:12 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:03.715 00:06:03.715 real 0m4.917s 00:06:03.715 user 0m4.562s 00:06:03.715 sys 0m0.687s 00:06:03.715 12:27:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.715 12:27:12 -- common/autotest_common.sh@10 -- # set +x 00:06:03.715 ************************************ 00:06:03.715 END TEST json_config_extra_key 00:06:03.715 ************************************ 00:06:03.715 12:27:12 -- 
spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.715 12:27:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.715 12:27:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.715 12:27:12 -- common/autotest_common.sh@10 -- # set +x 00:06:03.715 ************************************ 00:06:03.715 START TEST alias_rpc 00:06:03.715 ************************************ 00:06:03.715 12:27:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.715 * Looking for test storage... 00:06:03.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:03.715 12:27:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.715 12:27:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57476 00:06:03.715 12:27:12 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.715 12:27:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57476 00:06:03.715 12:27:12 -- common/autotest_common.sh@819 -- # '[' -z 57476 ']' 00:06:03.715 12:27:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.715 12:27:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.715 12:27:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.715 12:27:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.715 12:27:12 -- common/autotest_common.sh@10 -- # set +x 00:06:03.973 [2024-05-15 12:27:12.765184] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
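alias_rpc drives the freshly started target with a single rpc.py load_config -i call, visible in the trace below. A hypothetical minimal invocation of that call shape (the test's real config file is not shown in this log, and reading -i as the include-aliases switch is an assumption):

# Placeholder config piped into load_config; -i is assumed to mean
# include-aliases, letting deprecated RPC alias names resolve on load.
echo '{"subsystems": []}' | /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i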
00:06:03.974 [2024-05-15 12:27:12.765368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57476 ] 00:06:03.974 [2024-05-15 12:27:12.944190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.539 [2024-05-15 12:27:13.246737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.539 [2024-05-15 12:27:13.247032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.482 12:27:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.482 12:27:14 -- common/autotest_common.sh@852 -- # return 0 00:06:05.482 12:27:14 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:05.740 12:27:14 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57476 00:06:05.740 12:27:14 -- common/autotest_common.sh@926 -- # '[' -z 57476 ']' 00:06:05.740 12:27:14 -- common/autotest_common.sh@930 -- # kill -0 57476 00:06:05.740 12:27:14 -- common/autotest_common.sh@931 -- # uname 00:06:05.740 12:27:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.740 12:27:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57476 00:06:05.740 12:27:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.740 killing process with pid 57476 00:06:05.740 12:27:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.740 12:27:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57476' 00:06:05.740 12:27:14 -- common/autotest_common.sh@945 -- # kill 57476 00:06:05.740 12:27:14 -- common/autotest_common.sh@950 -- # wait 57476 00:06:08.271 ************************************ 00:06:08.271 END TEST alias_rpc 00:06:08.271 ************************************ 00:06:08.271 00:06:08.271 real 0m4.293s 00:06:08.271 user 0m4.480s 00:06:08.271 sys 0m0.657s 00:06:08.271 12:27:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.271 12:27:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.271 12:27:16 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:08.271 12:27:16 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:08.271 12:27:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.271 12:27:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.271 12:27:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.271 ************************************ 00:06:08.271 START TEST spdkcli_tcp 00:06:08.271 ************************************ 00:06:08.271 12:27:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:08.271 * Looking for test storage... 
00:06:08.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:08.271 12:27:16 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:08.271 12:27:16 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:08.271 12:27:16 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:08.271 12:27:16 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:08.271 12:27:16 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:08.271 12:27:16 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:08.271 12:27:16 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:08.271 12:27:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:08.271 12:27:16 -- common/autotest_common.sh@10 -- # set +x 00:06:08.271 12:27:17 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57576 00:06:08.271 12:27:17 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:08.271 12:27:17 -- spdkcli/tcp.sh@27 -- # waitforlisten 57576 00:06:08.271 12:27:17 -- common/autotest_common.sh@819 -- # '[' -z 57576 ']' 00:06:08.271 12:27:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.271 12:27:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.271 12:27:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.271 12:27:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.271 12:27:17 -- common/autotest_common.sh@10 -- # set +x 00:06:08.271 [2024-05-15 12:27:17.123114] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
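Unlike the earlier single-core targets, this one starts with -m 0x3 -p 0: a two-core mask (bits 0 and 1) with the main reactor on core 0, which is why two "Reactor started" lines appear below. A quick shell check for decoding such masks:

# Decoding the -m core mask: each set bit selects one core.
mask=0x3
for core in {0..3}; do
  (( mask & (1 << core) )) && echo "core $core is in mask $mask"
done
# prints: core 0 is in mask 0x3, then core 1 is in mask 0x3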
00:06:08.271 [2024-05-15 12:27:17.123321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57576 ] 00:06:08.538 [2024-05-15 12:27:17.298216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.538 [2024-05-15 12:27:17.547695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.796 [2024-05-15 12:27:17.548184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.796 [2024-05-15 12:27:17.548201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.171 12:27:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.171 12:27:18 -- common/autotest_common.sh@852 -- # return 0 00:06:10.171 12:27:18 -- spdkcli/tcp.sh@31 -- # socat_pid=57606 00:06:10.171 12:27:18 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:10.171 12:27:18 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:10.171 [ 00:06:10.171 "bdev_malloc_delete", 00:06:10.171 "bdev_malloc_create", 00:06:10.171 "bdev_null_resize", 00:06:10.171 "bdev_null_delete", 00:06:10.171 "bdev_null_create", 00:06:10.171 "bdev_nvme_cuse_unregister", 00:06:10.171 "bdev_nvme_cuse_register", 00:06:10.171 "bdev_opal_new_user", 00:06:10.171 "bdev_opal_set_lock_state", 00:06:10.171 "bdev_opal_delete", 00:06:10.171 "bdev_opal_get_info", 00:06:10.171 "bdev_opal_create", 00:06:10.171 "bdev_nvme_opal_revert", 00:06:10.171 "bdev_nvme_opal_init", 00:06:10.171 "bdev_nvme_send_cmd", 00:06:10.171 "bdev_nvme_get_path_iostat", 00:06:10.171 "bdev_nvme_get_mdns_discovery_info", 00:06:10.171 "bdev_nvme_stop_mdns_discovery", 00:06:10.171 "bdev_nvme_start_mdns_discovery", 00:06:10.171 "bdev_nvme_set_multipath_policy", 00:06:10.171 "bdev_nvme_set_preferred_path", 00:06:10.171 "bdev_nvme_get_io_paths", 00:06:10.171 "bdev_nvme_remove_error_injection", 00:06:10.171 "bdev_nvme_add_error_injection", 00:06:10.171 "bdev_nvme_get_discovery_info", 00:06:10.171 "bdev_nvme_stop_discovery", 00:06:10.171 "bdev_nvme_start_discovery", 00:06:10.171 "bdev_nvme_get_controller_health_info", 00:06:10.171 "bdev_nvme_disable_controller", 00:06:10.171 "bdev_nvme_enable_controller", 00:06:10.171 "bdev_nvme_reset_controller", 00:06:10.171 "bdev_nvme_get_transport_statistics", 00:06:10.171 "bdev_nvme_apply_firmware", 00:06:10.171 "bdev_nvme_detach_controller", 00:06:10.171 "bdev_nvme_get_controllers", 00:06:10.171 "bdev_nvme_attach_controller", 00:06:10.171 "bdev_nvme_set_hotplug", 00:06:10.171 "bdev_nvme_set_options", 00:06:10.171 "bdev_passthru_delete", 00:06:10.171 "bdev_passthru_create", 00:06:10.171 "bdev_lvol_grow_lvstore", 00:06:10.171 "bdev_lvol_get_lvols", 00:06:10.171 "bdev_lvol_get_lvstores", 00:06:10.171 "bdev_lvol_delete", 00:06:10.171 "bdev_lvol_set_read_only", 00:06:10.171 "bdev_lvol_resize", 00:06:10.171 "bdev_lvol_decouple_parent", 00:06:10.171 "bdev_lvol_inflate", 00:06:10.171 "bdev_lvol_rename", 00:06:10.171 "bdev_lvol_clone_bdev", 00:06:10.171 "bdev_lvol_clone", 00:06:10.171 "bdev_lvol_snapshot", 00:06:10.171 "bdev_lvol_create", 00:06:10.171 "bdev_lvol_delete_lvstore", 00:06:10.171 "bdev_lvol_rename_lvstore", 00:06:10.171 "bdev_lvol_create_lvstore", 00:06:10.171 "bdev_raid_set_options", 00:06:10.171 "bdev_raid_remove_base_bdev", 00:06:10.171 "bdev_raid_add_base_bdev", 
00:06:10.171 "bdev_raid_delete", 00:06:10.171 "bdev_raid_create", 00:06:10.171 "bdev_raid_get_bdevs", 00:06:10.171 "bdev_error_inject_error", 00:06:10.171 "bdev_error_delete", 00:06:10.171 "bdev_error_create", 00:06:10.171 "bdev_split_delete", 00:06:10.171 "bdev_split_create", 00:06:10.171 "bdev_delay_delete", 00:06:10.171 "bdev_delay_create", 00:06:10.171 "bdev_delay_update_latency", 00:06:10.171 "bdev_zone_block_delete", 00:06:10.171 "bdev_zone_block_create", 00:06:10.171 "blobfs_create", 00:06:10.171 "blobfs_detect", 00:06:10.171 "blobfs_set_cache_size", 00:06:10.171 "bdev_xnvme_delete", 00:06:10.171 "bdev_xnvme_create", 00:06:10.171 "bdev_aio_delete", 00:06:10.172 "bdev_aio_rescan", 00:06:10.172 "bdev_aio_create", 00:06:10.172 "bdev_ftl_set_property", 00:06:10.172 "bdev_ftl_get_properties", 00:06:10.172 "bdev_ftl_get_stats", 00:06:10.172 "bdev_ftl_unmap", 00:06:10.172 "bdev_ftl_unload", 00:06:10.172 "bdev_ftl_delete", 00:06:10.172 "bdev_ftl_load", 00:06:10.172 "bdev_ftl_create", 00:06:10.172 "bdev_virtio_attach_controller", 00:06:10.172 "bdev_virtio_scsi_get_devices", 00:06:10.172 "bdev_virtio_detach_controller", 00:06:10.172 "bdev_virtio_blk_set_hotplug", 00:06:10.172 "bdev_iscsi_delete", 00:06:10.172 "bdev_iscsi_create", 00:06:10.172 "bdev_iscsi_set_options", 00:06:10.172 "accel_error_inject_error", 00:06:10.172 "ioat_scan_accel_module", 00:06:10.172 "dsa_scan_accel_module", 00:06:10.172 "iaa_scan_accel_module", 00:06:10.172 "iscsi_set_options", 00:06:10.172 "iscsi_get_auth_groups", 00:06:10.172 "iscsi_auth_group_remove_secret", 00:06:10.172 "iscsi_auth_group_add_secret", 00:06:10.172 "iscsi_delete_auth_group", 00:06:10.172 "iscsi_create_auth_group", 00:06:10.172 "iscsi_set_discovery_auth", 00:06:10.172 "iscsi_get_options", 00:06:10.172 "iscsi_target_node_request_logout", 00:06:10.172 "iscsi_target_node_set_redirect", 00:06:10.172 "iscsi_target_node_set_auth", 00:06:10.172 "iscsi_target_node_add_lun", 00:06:10.172 "iscsi_get_connections", 00:06:10.172 "iscsi_portal_group_set_auth", 00:06:10.172 "iscsi_start_portal_group", 00:06:10.172 "iscsi_delete_portal_group", 00:06:10.172 "iscsi_create_portal_group", 00:06:10.172 "iscsi_get_portal_groups", 00:06:10.172 "iscsi_delete_target_node", 00:06:10.172 "iscsi_target_node_remove_pg_ig_maps", 00:06:10.172 "iscsi_target_node_add_pg_ig_maps", 00:06:10.172 "iscsi_create_target_node", 00:06:10.172 "iscsi_get_target_nodes", 00:06:10.172 "iscsi_delete_initiator_group", 00:06:10.172 "iscsi_initiator_group_remove_initiators", 00:06:10.172 "iscsi_initiator_group_add_initiators", 00:06:10.172 "iscsi_create_initiator_group", 00:06:10.172 "iscsi_get_initiator_groups", 00:06:10.172 "nvmf_set_crdt", 00:06:10.172 "nvmf_set_config", 00:06:10.172 "nvmf_set_max_subsystems", 00:06:10.172 "nvmf_subsystem_get_listeners", 00:06:10.172 "nvmf_subsystem_get_qpairs", 00:06:10.172 "nvmf_subsystem_get_controllers", 00:06:10.172 "nvmf_get_stats", 00:06:10.172 "nvmf_get_transports", 00:06:10.172 "nvmf_create_transport", 00:06:10.172 "nvmf_get_targets", 00:06:10.172 "nvmf_delete_target", 00:06:10.172 "nvmf_create_target", 00:06:10.172 "nvmf_subsystem_allow_any_host", 00:06:10.172 "nvmf_subsystem_remove_host", 00:06:10.172 "nvmf_subsystem_add_host", 00:06:10.172 "nvmf_subsystem_remove_ns", 00:06:10.172 "nvmf_subsystem_add_ns", 00:06:10.172 "nvmf_subsystem_listener_set_ana_state", 00:06:10.172 "nvmf_discovery_get_referrals", 00:06:10.172 "nvmf_discovery_remove_referral", 00:06:10.172 "nvmf_discovery_add_referral", 00:06:10.172 "nvmf_subsystem_remove_listener", 00:06:10.172 
"nvmf_subsystem_add_listener", 00:06:10.172 "nvmf_delete_subsystem", 00:06:10.172 "nvmf_create_subsystem", 00:06:10.172 "nvmf_get_subsystems", 00:06:10.172 "env_dpdk_get_mem_stats", 00:06:10.172 "nbd_get_disks", 00:06:10.172 "nbd_stop_disk", 00:06:10.172 "nbd_start_disk", 00:06:10.172 "ublk_recover_disk", 00:06:10.172 "ublk_get_disks", 00:06:10.172 "ublk_stop_disk", 00:06:10.172 "ublk_start_disk", 00:06:10.172 "ublk_destroy_target", 00:06:10.172 "ublk_create_target", 00:06:10.172 "virtio_blk_create_transport", 00:06:10.172 "virtio_blk_get_transports", 00:06:10.172 "vhost_controller_set_coalescing", 00:06:10.172 "vhost_get_controllers", 00:06:10.172 "vhost_delete_controller", 00:06:10.172 "vhost_create_blk_controller", 00:06:10.172 "vhost_scsi_controller_remove_target", 00:06:10.172 "vhost_scsi_controller_add_target", 00:06:10.172 "vhost_start_scsi_controller", 00:06:10.172 "vhost_create_scsi_controller", 00:06:10.172 "thread_set_cpumask", 00:06:10.172 "framework_get_scheduler", 00:06:10.172 "framework_set_scheduler", 00:06:10.172 "framework_get_reactors", 00:06:10.172 "thread_get_io_channels", 00:06:10.172 "thread_get_pollers", 00:06:10.172 "thread_get_stats", 00:06:10.172 "framework_monitor_context_switch", 00:06:10.172 "spdk_kill_instance", 00:06:10.172 "log_enable_timestamps", 00:06:10.172 "log_get_flags", 00:06:10.172 "log_clear_flag", 00:06:10.172 "log_set_flag", 00:06:10.172 "log_get_level", 00:06:10.172 "log_set_level", 00:06:10.172 "log_get_print_level", 00:06:10.172 "log_set_print_level", 00:06:10.172 "framework_enable_cpumask_locks", 00:06:10.172 "framework_disable_cpumask_locks", 00:06:10.172 "framework_wait_init", 00:06:10.172 "framework_start_init", 00:06:10.172 "scsi_get_devices", 00:06:10.172 "bdev_get_histogram", 00:06:10.172 "bdev_enable_histogram", 00:06:10.172 "bdev_set_qos_limit", 00:06:10.172 "bdev_set_qd_sampling_period", 00:06:10.172 "bdev_get_bdevs", 00:06:10.172 "bdev_reset_iostat", 00:06:10.172 "bdev_get_iostat", 00:06:10.172 "bdev_examine", 00:06:10.172 "bdev_wait_for_examine", 00:06:10.172 "bdev_set_options", 00:06:10.172 "notify_get_notifications", 00:06:10.172 "notify_get_types", 00:06:10.172 "accel_get_stats", 00:06:10.172 "accel_set_options", 00:06:10.172 "accel_set_driver", 00:06:10.172 "accel_crypto_key_destroy", 00:06:10.172 "accel_crypto_keys_get", 00:06:10.172 "accel_crypto_key_create", 00:06:10.172 "accel_assign_opc", 00:06:10.172 "accel_get_module_info", 00:06:10.172 "accel_get_opc_assignments", 00:06:10.172 "vmd_rescan", 00:06:10.172 "vmd_remove_device", 00:06:10.172 "vmd_enable", 00:06:10.172 "sock_set_default_impl", 00:06:10.172 "sock_impl_set_options", 00:06:10.172 "sock_impl_get_options", 00:06:10.172 "iobuf_get_stats", 00:06:10.172 "iobuf_set_options", 00:06:10.172 "framework_get_pci_devices", 00:06:10.172 "framework_get_config", 00:06:10.172 "framework_get_subsystems", 00:06:10.172 "trace_get_info", 00:06:10.172 "trace_get_tpoint_group_mask", 00:06:10.172 "trace_disable_tpoint_group", 00:06:10.172 "trace_enable_tpoint_group", 00:06:10.172 "trace_clear_tpoint_mask", 00:06:10.172 "trace_set_tpoint_mask", 00:06:10.172 "spdk_get_version", 00:06:10.172 "rpc_get_methods" 00:06:10.172 ] 00:06:10.172 12:27:19 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:10.172 12:27:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:10.172 12:27:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.172 12:27:19 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:10.172 12:27:19 -- spdkcli/tcp.sh@38 -- # killprocess 57576 00:06:10.172 
12:27:19 -- common/autotest_common.sh@926 -- # '[' -z 57576 ']' 00:06:10.172 12:27:19 -- common/autotest_common.sh@930 -- # kill -0 57576 00:06:10.172 12:27:19 -- common/autotest_common.sh@931 -- # uname 00:06:10.172 12:27:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.172 12:27:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57576 00:06:10.172 12:27:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:10.172 killing process with pid 57576 00:06:10.172 12:27:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:10.172 12:27:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57576' 00:06:10.172 12:27:19 -- common/autotest_common.sh@945 -- # kill 57576 00:06:10.172 12:27:19 -- common/autotest_common.sh@950 -- # wait 57576 00:06:12.699 00:06:12.699 real 0m4.435s 00:06:12.699 user 0m8.118s 00:06:12.699 sys 0m0.695s 00:06:12.699 12:27:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.699 ************************************ 00:06:12.699 END TEST spdkcli_tcp 00:06:12.699 ************************************ 00:06:12.699 12:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.699 12:27:21 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.699 12:27:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.699 12:27:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.699 12:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.699 ************************************ 00:06:12.699 START TEST dpdk_mem_utility 00:06:12.699 ************************************ 00:06:12.699 12:27:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.699 * Looking for test storage... 00:06:12.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:12.699 12:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:12.699 12:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57691 00:06:12.699 12:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.699 12:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57691 00:06:12.699 12:27:21 -- common/autotest_common.sh@819 -- # '[' -z 57691 ']' 00:06:12.699 12:27:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.699 12:27:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.699 12:27:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.700 12:27:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.700 12:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.700 [2024-05-15 12:27:21.612561] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
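The dpdk_mem_utility test starting here has a two-step flow, all traced below: an RPC makes the target dump its DPDK memory state to a file, and a script renders it. In outline, using the commands from this run:

# Flow of test_dpdk_mem_info.sh as traced below: env_dpdk_get_mem_stats
# returns {"filename": "/tmp/spdk_mem_dump.txt"}, which dpdk_mem_info.py
# then parses into the heap/mempool/memzone summaries.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # summary view
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element dump for heap 0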
00:06:12.700 [2024-05-15 12:27:21.612734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57691 ] 00:06:12.987 [2024-05-15 12:27:21.789591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.245 [2024-05-15 12:27:22.043682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.245 [2024-05-15 12:27:22.043917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.620 12:27:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.620 12:27:23 -- common/autotest_common.sh@852 -- # return 0 00:06:14.620 12:27:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.620 12:27:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.620 12:27:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:14.620 12:27:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.620 { 00:06:14.620 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.620 } 00:06:14.620 12:27:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:14.620 12:27:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:14.620 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:14.620 1 heaps totaling size 820.000000 MiB 00:06:14.620 size: 820.000000 MiB heap id: 0 00:06:14.620 end heaps---------- 00:06:14.620 8 mempools totaling size 598.116089 MiB 00:06:14.620 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.620 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.620 size: 84.521057 MiB name: bdev_io_57691 00:06:14.620 size: 51.011292 MiB name: evtpool_57691 00:06:14.620 size: 50.003479 MiB name: msgpool_57691 00:06:14.620 size: 21.763794 MiB name: PDU_Pool 00:06:14.620 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:14.620 size: 0.026123 MiB name: Session_Pool 00:06:14.620 end mempools------- 00:06:14.620 6 memzones totaling size 4.142822 MiB 00:06:14.620 size: 1.000366 MiB name: RG_ring_0_57691 00:06:14.620 size: 1.000366 MiB name: RG_ring_1_57691 00:06:14.620 size: 1.000366 MiB name: RG_ring_4_57691 00:06:14.620 size: 1.000366 MiB name: RG_ring_5_57691 00:06:14.620 size: 0.125366 MiB name: RG_ring_2_57691 00:06:14.620 size: 0.015991 MiB name: RG_ring_3_57691 00:06:14.620 end memzones------- 00:06:14.620 12:27:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.620 heap id: 0 total size: 820.000000 MiB number of busy elements: 304 number of free elements: 18 00:06:14.620 list of free elements. 
size: 18.450562 MiB 00:06:14.620 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:14.620 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:14.620 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:14.620 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:14.620 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:14.620 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:14.620 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:14.620 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:14.620 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:14.620 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:14.620 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:14.620 element at address: 0x200000200000 with size: 0.829224 MiB 00:06:14.620 element at address: 0x20001b000000 with size: 0.564148 MiB 00:06:14.620 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:14.620 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:14.620 element at address: 0x200013800000 with size: 0.467651 MiB 00:06:14.620 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:14.620 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:14.620 list of standard malloc elements. size: 199.285034 MiB 00:06:14.620 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:14.620 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:14.620 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:14.620 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:14.620 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:14.620 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:14.620 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:14.620 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:14.620 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:14.620 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:14.620 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:14.620 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:06:14.620 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:14.620 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:14.621 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013877b80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0928c0 with size: 0.000244 MiB 
00:06:14.621 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:14.621 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:14.622 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:14.622 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:14.622 element at 
address: 0x20002846b280 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b780 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e380 
with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:14.622 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:14.622 list of memzone associated elements. 
size: 602.264404 MiB 00:06:14.622 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:14.622 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.622 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:14.622 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.622 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:14.622 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57691_0 00:06:14.622 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:14.622 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57691_0 00:06:14.622 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:14.622 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57691_0 00:06:14.622 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:14.622 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.622 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:14.622 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.622 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:14.622 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57691 00:06:14.622 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:14.622 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57691 00:06:14.622 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:14.622 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57691 00:06:14.622 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:14.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.622 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:14.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.622 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:14.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.622 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:14.622 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.622 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:14.622 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57691 00:06:14.622 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:14.622 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57691 00:06:14.622 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:14.622 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57691 00:06:14.622 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:14.622 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57691 00:06:14.622 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:14.622 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57691 00:06:14.622 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:14.622 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.622 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:14.622 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.622 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:14.622 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.622 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:14.622 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57691 00:06:14.622 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:14.622 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.622 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:14.622 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.622 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:14.622 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57691 00:06:14.623 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:14.623 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.623 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:14.623 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57691 00:06:14.623 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:14.623 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57691 00:06:14.623 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:14.623 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.623 12:27:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.623 12:27:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57691 00:06:14.623 12:27:23 -- common/autotest_common.sh@926 -- # '[' -z 57691 ']' 00:06:14.623 12:27:23 -- common/autotest_common.sh@930 -- # kill -0 57691 00:06:14.623 12:27:23 -- common/autotest_common.sh@931 -- # uname 00:06:14.623 12:27:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.623 12:27:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57691 00:06:14.623 killing process with pid 57691 00:06:14.623 12:27:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.623 12:27:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.623 12:27:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57691' 00:06:14.623 12:27:23 -- common/autotest_common.sh@945 -- # kill 57691 00:06:14.623 12:27:23 -- common/autotest_common.sh@950 -- # wait 57691 00:06:17.153 00:06:17.153 real 0m4.250s 00:06:17.153 user 0m4.440s 00:06:17.153 sys 0m0.655s 00:06:17.153 ************************************ 00:06:17.153 END TEST dpdk_mem_utility 00:06:17.153 ************************************ 00:06:17.153 12:27:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.153 12:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:17.153 12:27:25 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.153 12:27:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.153 12:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.153 12:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:17.153 ************************************ 00:06:17.153 START TEST event 00:06:17.153 ************************************ 00:06:17.153 12:27:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.153 * Looking for test storage... 
00:06:17.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:17.153 12:27:25 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:17.153 12:27:25 -- bdev/nbd_common.sh@6 -- # set -e
00:06:17.153 12:27:25 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:17.153 12:27:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:06:17.153 12:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:17.153 12:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:17.153 ************************************
00:06:17.153 START TEST event_perf
00:06:17.153 ************************************
00:06:17.153 12:27:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:17.153 Running I/O for 1 seconds...[2024-05-15 12:27:25.829633] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:06:17.153 [2024-05-15 12:27:25.829774] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ]
00:06:17.153 [2024-05-15 12:27:25.992953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:17.411 [2024-05-15 12:27:26.324656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:17.411 [2024-05-15 12:27:26.324746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:17.411 [2024-05-15 12:27:26.324861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.411 [2024-05-15 12:27:26.324875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:18.785 Running I/O for 1 seconds...
00:06:18.785 lcore 0: 180536
00:06:18.785 lcore 1: 180538
00:06:18.785 lcore 2: 180539
00:06:18.785 lcore 3: 180535
00:06:18.785 done.
00:06:18.785
00:06:18.785 real 0m1.879s
00:06:18.785 user 0m4.636s
00:06:18.785 sys 0m0.120s
00:06:18.785 12:27:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:18.785 ************************************
00:06:18.785 END TEST event_perf
00:06:18.785 ************************************
00:06:18.785 12:27:27 -- common/autotest_common.sh@10 -- # set +x
00:06:18.785 12:27:27 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:18.785 12:27:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:06:18.785 12:27:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:18.785 12:27:27 -- common/autotest_common.sh@10 -- # set +x
00:06:18.785 ************************************
00:06:18.785 START TEST event_reactor
00:06:18.785 ************************************
00:06:18.785 12:27:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:18.785 [2024-05-15 12:27:27.764216] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:06:18.785 [2024-05-15 12:27:27.764390] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57837 ]
00:06:19.044 [2024-05-15 12:27:27.941447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.302 [2024-05-15 12:27:28.228174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.677 test_start
00:06:20.677 oneshot
00:06:20.677 tick 100
00:06:20.677 tick 100
00:06:20.677 tick 250
00:06:20.677 tick 100
00:06:20.677 tick 100
00:06:20.677 tick 100
00:06:20.677 tick 250
00:06:20.677 tick 500
00:06:20.677 tick 100
00:06:20.677 tick 100
00:06:20.677 tick 250
00:06:20.677 tick 100
00:06:20.677 tick 100
00:06:20.677 test_end
00:06:20.677
00:06:20.677 real 0m1.858s
00:06:20.677 user 0m1.627s
00:06:20.677 sys 0m0.120s
00:06:20.677 12:27:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.677 12:27:29 -- common/autotest_common.sh@10 -- # set +x
00:06:20.677 ************************************
00:06:20.677 END TEST event_reactor
00:06:20.677 ************************************
00:06:20.677 12:27:29 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:20.677 12:27:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:06:20.677 12:27:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:20.677 12:27:29 -- common/autotest_common.sh@10 -- # set +x
00:06:20.677 ************************************
00:06:20.677 START TEST event_reactor_perf
00:06:20.677 ************************************
00:06:20.677 12:27:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:20.677 [2024-05-15 12:27:29.672300] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:06:20.677 [2024-05-15 12:27:29.672472] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57879 ]
00:06:20.935 [2024-05-15 12:27:29.845704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.203 [2024-05-15 12:27:30.084475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.578 test_start
00:06:22.578 test_end
00:06:22.578 Performance: 279371 events per second
00:06:22.578
00:06:22.578 real 0m1.871s
00:06:22.578 user 0m1.642s
00:06:22.578 sys 0m0.118s
00:06:22.578 12:27:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:22.578 ************************************
00:06:22.578 END TEST event_reactor_perf
00:06:22.578 ************************************
00:06:22.578 12:27:31 -- common/autotest_common.sh@10 -- # set +x
00:06:22.578 12:27:31 -- event/event.sh@49 -- # uname -s
00:06:22.578 12:27:31 -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:22.578 12:27:31 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:22.578 12:27:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:22.578 12:27:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:22.578 12:27:31 -- common/autotest_common.sh@10 -- # set +x
00:06:22.578 ************************************
00:06:22.578 START TEST event_scheduler
00:06:22.578 ************************************
00:06:22.578 12:27:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:22.835 * Looking for test storage...
00:06:22.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:06:22.835 12:27:31 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:22.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.835 12:27:31 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57946
00:06:22.835 12:27:31 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:22.835 12:27:31 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:22.835 12:27:31 -- scheduler/scheduler.sh@37 -- # waitforlisten 57946
00:06:22.835 12:27:31 -- common/autotest_common.sh@819 -- # '[' -z 57946 ']'
00:06:22.835 12:27:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.835 12:27:31 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:22.835 12:27:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:22.835 12:27:31 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:22.835 12:27:31 -- common/autotest_common.sh@10 -- # set +x
00:06:22.835 [2024-05-15 12:27:31.737136] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:06:22.835 [2024-05-15 12:27:31.737667] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57946 ]
00:06:23.093 [2024-05-15 12:27:31.916045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:23.352 [2024-05-15 12:27:32.160603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.352 [2024-05-15 12:27:32.160700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.352 [2024-05-15 12:27:32.160835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:23.352 [2024-05-15 12:27:32.160850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:23.917 12:27:32 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:23.917 12:27:32 -- common/autotest_common.sh@852 -- # return 0
00:06:23.917 12:27:32 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:23.917 12:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:23.917 12:27:32 -- common/autotest_common.sh@10 -- # set +x
00:06:23.917 POWER: Env isn't set yet!
00:06:23.917 POWER: Attempting to initialise ACPI cpufreq power management...
00:06:23.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:23.917 POWER: Cannot set governor of lcore 0 to userspace
00:06:23.917 POWER: Attempting to initialise PSTAT power management...
00:06:23.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:23.917 POWER: Cannot set governor of lcore 0 to performance
00:06:23.917 POWER: Attempting to initialise AMD PSTATE power management...
00:06:23.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:23.917 POWER: Cannot set governor of lcore 0 to userspace
00:06:23.917 POWER: Attempting to initialise CPPC power management...
00:06:23.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:23.917 POWER: Cannot set governor of lcore 0 to userspace
00:06:23.917 POWER: Attempting to initialise VM power management...
00:06:23.917 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:23.917 POWER: Unable to set Power Management Environment for lcore 0
00:06:23.917 [2024-05-15 12:27:32.655263] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0
00:06:23.917 [2024-05-15 12:27:32.655288] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0
00:06:23.917 [2024-05-15 12:27:32.655303] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor
00:06:23.917 12:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:23.917 12:27:32 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:23.917 12:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:23.917 12:27:32 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 [2024-05-15 12:27:32.989048] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
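[Editor's note: the trace above drives a scheduler app started with --wait-for-rpc, so the dynamic scheduler is selected over JSON-RPC before subsystem init; the POWER/dpdk_governor errors are expected inside a VM, where the cpufreq scaling_governor files are absent, and the test continues without a governor. A minimal manual reproduction of that init sequence, assuming an app already listening on /var/tmp/spdk.sock (a sketch, not the verbatim test script), might look like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    # select the dynamic scheduler while the app is still waiting for RPCs
    "$rpc" -s "$sock" framework_set_scheduler dynamic
    # then let subsystem initialization finish
    "$rpc" -s "$sock" framework_start_init
]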
00:06:24.175 12:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:32 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:24.175 12:27:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:24.175 12:27:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:24.175 12:27:32 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 ************************************
00:06:24.175 START TEST scheduler_create_thread
00:06:24.175 ************************************
00:06:24.175 12:27:33 -- common/autotest_common.sh@1104 -- # scheduler_create_thread
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 2
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 3
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 4
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 5
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 6
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 7
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 8
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 9
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 10
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:24.175 12:27:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:24.175 12:27:33 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:24.175 12:27:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:24.175 12:27:33 -- common/autotest_common.sh@10 -- # set +x
00:06:25.109 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:25.109 12:27:34 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:25.109 12:27:34 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:25.109 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:25.109 12:27:34 -- common/autotest_common.sh@10 -- # set +x
00:06:26.481 ************************************
00:06:26.481 END TEST scheduler_create_thread
00:06:26.481 ************************************
00:06:26.481 12:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:26.481
00:06:26.481 real 0m2.138s
00:06:26.481 user 0m0.015s
00:06:26.481 sys 0m0.009s
00:06:26.481 12:27:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:26.481 12:27:35 -- common/autotest_common.sh@10 -- # set +x
00:06:26.481 12:27:35 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:26.481 12:27:35 -- scheduler/scheduler.sh@46 -- # killprocess 57946
00:06:26.481 12:27:35 -- common/autotest_common.sh@926 -- # '[' -z 57946 ']'
00:06:26.481 12:27:35 -- common/autotest_common.sh@930 -- # kill -0 57946
00:06:26.481 12:27:35 -- common/autotest_common.sh@931 -- # uname
00:06:26.481 12:27:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:26.481 12:27:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57946
killing process with pid 57946
12:27:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2
12:27:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
12:27:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57946'
12:27:35 -- common/autotest_common.sh@945 -- # kill 57946
12:27:35 -- common/autotest_common.sh@950 -- # wait 57946
00:06:26.739 [2024-05-15 12:27:35.619114] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
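[Editor's note: the killprocess helper traced above (and earlier for pid 57691) is what tears each SPDK app down. A condensed sketch of the traced path, for orientation only; the real helper lives in common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # no pid given, nothing to do
        kill -0 "$pid" || return 1             # probe: is the process still alive?
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the sudo-wrapper case takes a different branch in the real helper
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap the child and surface its exit code
    }
]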
00:06:28.110
00:06:28.110 real 0m5.268s
00:06:28.110 user 0m8.530s
00:06:28.110 sys 0m0.491s
00:06:28.110 12:27:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:28.110 ************************************
00:06:28.110 END TEST event_scheduler
00:06:28.110 ************************************
00:06:28.110 12:27:36 -- common/autotest_common.sh@10 -- # set +x
00:06:28.110 12:27:36 -- event/event.sh@51 -- # modprobe -n nbd
00:06:28.110 12:27:36 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:28.110 12:27:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:28.110 12:27:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:28.110 12:27:36 -- common/autotest_common.sh@10 -- # set +x
00:06:28.110 ************************************
00:06:28.110 START TEST app_repeat
00:06:28.110 ************************************
00:06:28.110 12:27:36 -- common/autotest_common.sh@1104 -- # app_repeat_test
00:06:28.110 12:27:36 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:28.110 12:27:36 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:28.110 12:27:36 -- event/event.sh@13 -- # local nbd_list
00:06:28.110 12:27:36 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:28.110 12:27:36 -- event/event.sh@14 -- # local bdev_list
00:06:28.110 12:27:36 -- event/event.sh@15 -- # local repeat_times=4
00:06:28.110 12:27:36 -- event/event.sh@17 -- # modprobe nbd
00:06:28.110 Process app_repeat pid: 58052
00:06:28.110 spdk_app_start Round 0
00:06:28.110 12:27:36 -- event/event.sh@19 -- # repeat_pid=58052
00:06:28.110 12:27:36 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:28.110 12:27:36 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:28.110 12:27:36 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58052'
00:06:28.110 12:27:36 -- event/event.sh@23 -- # for i in {0..2}
00:06:28.110 12:27:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:28.110 12:27:36 -- event/event.sh@25 -- # waitforlisten 58052 /var/tmp/spdk-nbd.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
12:27:36 -- common/autotest_common.sh@819 -- # '[' -z 58052 ']'
12:27:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
12:27:36 -- common/autotest_common.sh@824 -- # local max_retries=100
12:27:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
12:27:36 -- common/autotest_common.sh@828 -- # xtrace_disable
12:27:36 -- common/autotest_common.sh@10 -- # set +x
00:06:28.110 [2024-05-15 12:27:36.950125] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:06:28.110 [2024-05-15 12:27:36.950279] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58052 ]
00:06:28.367 [2024-05-15 12:27:37.128537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:28.367 [2024-05-15 12:27:37.372304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.367 [2024-05-15 12:27:37.372304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.932 12:27:37 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:28.932 12:27:37 -- common/autotest_common.sh@852 -- # return 0
00:06:28.932 12:27:37 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:29.189 Malloc0
00:06:29.447 12:27:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:29.705 Malloc1
00:06:29.705 12:27:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@12 -- # local i
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:29.705 12:27:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:29.972 /dev/nbd0
00:06:29.972 12:27:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:29.972 12:27:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:29.972 12:27:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:06:29.972 12:27:38 -- common/autotest_common.sh@857 -- # local i
00:06:29.972 12:27:38 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:29.972 12:27:38 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:29.972 12:27:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:06:29.972 12:27:38 -- common/autotest_common.sh@861 -- # break
00:06:29.972 12:27:38 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:29.972 12:27:38 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:29.972 12:27:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:29.972 1+0 records in
00:06:29.972 1+0 records out
00:06:29.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266264 s, 15.4 MB/s
00:06:29.972 12:27:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:29.972 12:27:38 -- common/autotest_common.sh@874 -- # size=4096
00:06:29.972 12:27:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:29.972 12:27:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:29.972 12:27:38 -- common/autotest_common.sh@877 -- # return 0
00:06:29.972 12:27:38 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:29.972 12:27:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:29.972 12:27:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:30.230 /dev/nbd1
00:06:30.230 12:27:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:30.230 12:27:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:30.230 12:27:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:06:30.230 12:27:39 -- common/autotest_common.sh@857 -- # local i
00:06:30.230 12:27:39 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:30.230 12:27:39 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:30.230 12:27:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:06:30.230 12:27:39 -- common/autotest_common.sh@861 -- # break
00:06:30.230 12:27:39 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:30.230 12:27:39 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:30.230 12:27:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:30.230 1+0 records in
00:06:30.230 1+0 records out
00:06:30.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366849 s, 11.2 MB/s
00:06:30.230 12:27:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:30.230 12:27:39 -- common/autotest_common.sh@874 -- # size=4096
00:06:30.230 12:27:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:30.230 12:27:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:30.230 12:27:39 -- common/autotest_common.sh@877 -- # return 0
00:06:30.230 12:27:39 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:30.230 12:27:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:30.230 12:27:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:30.230 12:27:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:30.230 12:27:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:30.487 {
00:06:30.487 "nbd_device": "/dev/nbd0",
00:06:30.487 "bdev_name": "Malloc0"
00:06:30.487 },
00:06:30.487 {
00:06:30.487 "nbd_device": "/dev/nbd1",
00:06:30.487 "bdev_name": "Malloc1"
00:06:30.487 }
00:06:30.487 ]'
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:30.487 {
00:06:30.487 "nbd_device": "/dev/nbd0",
00:06:30.487 "bdev_name": "Malloc0"
00:06:30.487 },
00:06:30.487 {
00:06:30.487 "nbd_device": "/dev/nbd1",
00:06:30.487 "bdev_name": "Malloc1"
00:06:30.487 }
00:06:30.487 ]'
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:30.487 /dev/nbd1'
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:30.487 /dev/nbd1'
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@65 -- # count=2
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@95 -- # count=2
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:30.487 256+0 records in
00:06:30.487 256+0 records out
00:06:30.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677958 s, 155 MB/s
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:30.487 12:27:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:30.488 256+0 records in
00:06:30.488 256+0 records out
00:06:30.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298998 s, 35.1 MB/s
00:06:30.488 12:27:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:30.488 12:27:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:30.745 256+0 records in
00:06:30.745 256+0 records out
00:06:30.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03413 s, 30.7 MB/s
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@51 -- # local i
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@41 -- # break
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@45 -- # return 0
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:30.745 12:27:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@41 -- # break
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@45 -- # return 0
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.003 12:27:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@65 -- # true
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@65 -- # count=0
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@104 -- # count=0
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:31.568 12:27:40 -- bdev/nbd_common.sh@109 -- # return 0
00:06:31.568 12:27:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:31.826 12:27:40 -- event/event.sh@35 -- # sleep 3
00:06:33.240 [2024-05-15 12:27:41.962860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:33.240 [2024-05-15 12:27:42.197085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.240 [2024-05-15 12:27:42.197092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.499 [2024-05-15 12:27:42.385879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:33.499 [2024-05-15 12:27:42.385990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
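[Editor's note: each app_repeat round exports Malloc0/Malloc1 through the kernel nbd driver and checks data integrity with plain dd and cmp, as traced above. The core of that write/verify pass, reduced to a sketch (paths follow this workspace; the real logic is nbd_dd_data_verify in test/bdev/nbd_common.sh):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    # write phase: 1 MiB of random data, copied raw onto each nbd device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    # verify phase: read the devices back and byte-compare against the source
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"
]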
00:06:34.875 12:27:43 -- event/event.sh@23 -- # for i in {0..2}
00:06:34.875 spdk_app_start Round 1
00:06:34.875 12:27:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:34.875 12:27:43 -- event/event.sh@25 -- # waitforlisten 58052 /var/tmp/spdk-nbd.sock
00:06:34.875 12:27:43 -- common/autotest_common.sh@819 -- # '[' -z 58052 ']'
00:06:34.875 12:27:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:34.875 12:27:43 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:34.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
12:27:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
12:27:43 -- common/autotest_common.sh@828 -- # xtrace_disable
12:27:43 -- common/autotest_common.sh@10 -- # set +x
00:06:35.133 12:27:44 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:35.133 12:27:44 -- common/autotest_common.sh@852 -- # return 0
00:06:35.133 12:27:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:35.392 Malloc0
00:06:35.392 12:27:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:35.652 Malloc1
00:06:35.652 12:27:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@12 -- # local i
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:35.652 12:27:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:36.219 /dev/nbd0
00:06:36.219 12:27:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:36.219 12:27:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:36.219 12:27:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:06:36.219 12:27:44 -- common/autotest_common.sh@857 -- # local i
00:06:36.219 12:27:44 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:36.219 12:27:44 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:36.219 12:27:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:06:36.219 12:27:44 -- common/autotest_common.sh@861 -- # break
00:06:36.219 12:27:44 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:36.219 12:27:44 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:36.219 12:27:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:36.219 1+0 records in
00:06:36.219 1+0 records out
00:06:36.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316197 s, 13.0 MB/s
00:06:36.219 12:27:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:36.219 12:27:44 -- common/autotest_common.sh@874 -- # size=4096
00:06:36.219 12:27:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:36.219 12:27:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:36.219 12:27:44 -- common/autotest_common.sh@877 -- # return 0
00:06:36.219 12:27:44 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:36.219 12:27:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:36.219 12:27:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:36.383 /dev/nbd1
00:06:36.383 12:27:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:36.383 12:27:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:36.383 12:27:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:06:36.383 12:27:45 -- common/autotest_common.sh@857 -- # local i
00:06:36.383 12:27:45 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:36.383 12:27:45 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:36.383 12:27:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:06:36.478 12:27:45 -- common/autotest_common.sh@861 -- # break
00:06:36.478 12:27:45 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:36.478 12:27:45 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:36.478 12:27:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:36.478 1+0 records in
00:06:36.478 1+0 records out
00:06:36.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333317 s, 12.3 MB/s
00:06:36.478 12:27:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:36.478 12:27:45 -- common/autotest_common.sh@874 -- # size=4096
00:06:36.478 12:27:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:36.478 12:27:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:36.478 12:27:45 -- common/autotest_common.sh@877 -- # return 0
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:36.478 {
00:06:36.478 "nbd_device": "/dev/nbd0",
00:06:36.478 "bdev_name": "Malloc0"
00:06:36.478 },
00:06:36.478 {
00:06:36.478 "nbd_device": "/dev/nbd1",
00:06:36.478 "bdev_name": "Malloc1"
00:06:36.478 }
00:06:36.478 ]'
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:36.478 {
00:06:36.478 "nbd_device": "/dev/nbd0",
00:06:36.478 "bdev_name": "Malloc0"
00:06:36.478 },
00:06:36.478 {
00:06:36.478 "nbd_device": "/dev/nbd1",
00:06:36.478 "bdev_name": "Malloc1"
00:06:36.478 }
00:06:36.478 ]'
00:06:36.478 12:27:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:36.737 /dev/nbd1'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:36.737 /dev/nbd1'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@65 -- # count=2
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@95 -- # count=2
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:36.737 256+0 records in
00:06:36.737 256+0 records out
00:06:36.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681081 s, 154 MB/s
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:36.737 256+0 records in
00:06:36.737 256+0 records out
00:06:36.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328221 s, 31.9 MB/s
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:36.737 256+0 records in
00:06:36.737 256+0 records out
00:06:36.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030668 s, 34.2 MB/s
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@51 -- # local i
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:36.737 12:27:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@41 -- # break
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@45 -- # return 0
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:36.997 12:27:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@41 -- # break
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@45 -- # return 0
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:37.256 12:27:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@65 -- # true
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@65 -- # count=0
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@104 -- # count=0
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:37.514 12:27:46 -- bdev/nbd_common.sh@109 -- # return 0
00:06:37.514 12:27:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:38.081 12:27:46 -- event/event.sh@35 -- # sleep 3
00:06:39.457 [2024-05-15 12:27:48.032050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:39.457 [2024-05-15 12:27:48.271598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.457 [2024-05-15 12:27:48.271629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.457 [2024-05-15 12:27:48.465340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:39.457 [2024-05-15 12:27:48.465478] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:40.832 spdk_app_start Round 2
00:06:40.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
12:27:49 -- event/event.sh@23 -- # for i in {0..2}
12:27:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
12:27:49 -- event/event.sh@25 -- # waitforlisten 58052 /var/tmp/spdk-nbd.sock
12:27:49 -- common/autotest_common.sh@819 -- # '[' -z 58052 ']'
12:27:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
12:27:49 -- common/autotest_common.sh@824 -- # local max_retries=100
12:27:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
12:27:49 -- common/autotest_common.sh@828 -- # xtrace_disable
12:27:49 -- common/autotest_common.sh@10 -- # set +x
00:06:41.090 12:27:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:41.090 12:27:50 -- common/autotest_common.sh@852 -- # return 0
00:06:41.090 12:27:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:41.349 Malloc0
00:06:41.349 12:27:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:41.608 Malloc1
00:06:41.866 12:27:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@12 -- # local i
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:41.866 12:27:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:42.125 /dev/nbd0
00:06:42.125 12:27:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:42.125 12:27:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:42.125 12:27:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:06:42.125 12:27:50 -- common/autotest_common.sh@857 -- # local i
00:06:42.125 12:27:50 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:42.125 12:27:50 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:42.125 12:27:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:06:42.125 12:27:50 -- common/autotest_common.sh@861 -- # break
00:06:42.125 12:27:50 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:42.125 12:27:50 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:42.125 12:27:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:42.125 1+0 records in
00:06:42.125 1+0 records out
00:06:42.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555885 s, 7.4 MB/s
00:06:42.125 12:27:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:42.125 12:27:50 -- common/autotest_common.sh@874 -- # size=4096
00:06:42.125 12:27:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:42.125 12:27:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:42.125 12:27:50 -- common/autotest_common.sh@877 -- # return 0
00:06:42.125 12:27:50 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:42.125 12:27:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:42.125 12:27:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:42.383 /dev/nbd1
00:06:42.383 12:27:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:42.383 12:27:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:42.383 12:27:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:06:42.383 12:27:51 -- common/autotest_common.sh@857 -- # local i
00:06:42.383 12:27:51 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:42.383 12:27:51 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:42.383 12:27:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:06:42.383 12:27:51 -- common/autotest_common.sh@861 -- # break
00:06:42.383 12:27:51 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:42.383 12:27:51 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:42.383 12:27:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:42.383 1+0 records in
00:06:42.383 1+0 records out
00:06:42.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347866 s, 11.8 MB/s
00:06:42.383 12:27:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:42.383 12:27:51 -- common/autotest_common.sh@874 -- # size=4096
00:06:42.383 12:27:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:42.383 12:27:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:42.383 12:27:51 -- common/autotest_common.sh@877 -- # return 0
00:06:42.383 12:27:51 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:42.383 12:27:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:42.383 12:27:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:42.383 12:27:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:42.383 12:27:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:42.642 {
00:06:42.642 "nbd_device": "/dev/nbd0",
00:06:42.642 "bdev_name": "Malloc0"
00:06:42.642 },
00:06:42.642 {
00:06:42.642 "nbd_device": "/dev/nbd1",
00:06:42.642 "bdev_name": "Malloc1"
00:06:42.642 }
00:06:42.642 ]'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:42.642 {
00:06:42.642 "nbd_device": "/dev/nbd0",
00:06:42.642 "bdev_name": "Malloc0"
00:06:42.642 },
00:06:42.642 {
00:06:42.642 "nbd_device": "/dev/nbd1",
00:06:42.642 "bdev_name": "Malloc1"
00:06:42.642 }
00:06:42.642 ]'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:42.642 /dev/nbd1'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:42.642 /dev/nbd1'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@65 -- # count=2
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@95 -- # count=2
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:42.642 256+0 records in
00:06:42.642 256+0 records out
00:06:42.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00981496 s, 107 MB/s
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:42.642 256+0 records in
00:06:42.642 256+0 records out
00:06:42.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024918 s, 42.1 MB/s
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:42.642 256+0 records in
00:06:42.642 256+0 records out
00:06:42.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309984 s, 33.8 MB/s
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@51 -- # local i
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:42.642 12:27:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@41 -- # break
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@45 -- # return 0
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:42.900 12:27:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@41 -- # break
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@45 -- # return 0
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:43.159 12:27:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@65 -- # true
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@65 -- # count=0
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@104 -- # count=0
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:43.417 12:27:52 -- bdev/nbd_common.sh@109 -- # return 0
00:06:43.417 12:27:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:43.986 12:27:52 -- event/event.sh@35 -- # sleep 3
00:06:45.457 [2024-05-15 12:27:54.130602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:45.636 [2024-05-15 12:27:54.398592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:45.636 [2024-05-15 12:27:54.398595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.636 [2024-05-15 12:27:54.615990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:45.636 [2024-05-15 12:27:54.616414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:47.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
12:27:55 -- event/event.sh@38 -- # waitforlisten 58052 /var/tmp/spdk-nbd.sock
12:27:55 -- common/autotest_common.sh@819 -- # '[' -z 58052 ']'
12:27:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
12:27:55 -- common/autotest_common.sh@824 -- # local max_retries=100
12:27:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
12:27:55 -- common/autotest_common.sh@828 -- # xtrace_disable
12:27:55 -- common/autotest_common.sh@10 -- # set +x
00:06:47.264 12:27:56 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:47.264 12:27:56 -- common/autotest_common.sh@852 -- # return 0
00:06:47.264 12:27:56 -- event/event.sh@39 -- # killprocess 58052
00:06:47.264 12:27:56 -- common/autotest_common.sh@926 -- # '[' -z 58052 ']'
00:06:47.264 12:27:56 -- common/autotest_common.sh@930 -- # kill -0 58052
00:06:47.264 12:27:56 -- common/autotest_common.sh@931 -- # uname
00:06:47.264 12:27:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:47.264 12:27:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58052
killing process with pid 58052
12:27:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0
12:27:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
12:27:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58052'
12:27:56 -- common/autotest_common.sh@945 -- # kill 58052
12:27:56 -- common/autotest_common.sh@950 -- # wait 58052
00:06:48.637 spdk_app_start is called in Round 0.
00:06:48.637 Shutdown signal received, stop current app iteration
00:06:48.637 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization...
00:06:48.637 spdk_app_start is called in Round 1.
00:06:48.637 Shutdown signal received, stop current app iteration
00:06:48.637 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization...
00:06:48.637 spdk_app_start is called in Round 2.
00:06:48.637 Shutdown signal received, stop current app iteration
00:06:48.637 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization...
00:06:48.637 spdk_app_start is called in Round 3.
00:06:48.637 Shutdown signal received, stop current app iteration 00:06:48.637 ************************************ 00:06:48.637 END TEST app_repeat 00:06:48.637 ************************************ 00:06:48.637 12:27:57 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:48.637 12:27:57 -- event/event.sh@42 -- # return 0 00:06:48.637 00:06:48.637 real 0m20.386s 00:06:48.637 user 0m43.006s 00:06:48.637 sys 0m3.025s 00:06:48.637 12:27:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.637 12:27:57 -- common/autotest_common.sh@10 -- # set +x 00:06:48.637 12:27:57 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:48.637 12:27:57 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:48.637 12:27:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.637 12:27:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.637 12:27:57 -- common/autotest_common.sh@10 -- # set +x 00:06:48.637 ************************************ 00:06:48.637 START TEST cpu_locks 00:06:48.637 ************************************ 00:06:48.637 12:27:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:48.637 * Looking for test storage... 00:06:48.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:48.637 12:27:57 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:48.637 12:27:57 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:48.638 12:27:57 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:48.638 12:27:57 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:48.638 12:27:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.638 12:27:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.638 12:27:57 -- common/autotest_common.sh@10 -- # set +x 00:06:48.638 ************************************ 00:06:48.638 START TEST default_locks 00:06:48.638 ************************************ 00:06:48.638 12:27:57 -- common/autotest_common.sh@1104 -- # default_locks 00:06:48.638 12:27:57 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58507 00:06:48.638 12:27:57 -- event/cpu_locks.sh@47 -- # waitforlisten 58507 00:06:48.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.638 12:27:57 -- common/autotest_common.sh@819 -- # '[' -z 58507 ']' 00:06:48.638 12:27:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.638 12:27:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.638 12:27:57 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.638 12:27:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.638 12:27:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.638 12:27:57 -- common/autotest_common.sh@10 -- # set +x 00:06:48.638 [2024-05-15 12:27:57.528439] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
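The killprocess helper that just tore down the app_repeat target (pid 58052), and that every cpu_locks test below reuses, follows one pattern in the trace: confirm the pid is still alive, resolve its command name, refuse to signal sudo, then SIGTERM and reap. A condensed re-creation of that pattern (the real helper in test/common/autotest_common.sh also handles the sudo-wrapped case):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # process must still exist
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 for an SPDK app
      [ "$process_name" = sudo ] && return 1          # never SIGTERM the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"                                       # SIGTERM by default
    wait "$pid"                                       # reap the child, propagate its status
  }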
00:06:48.638 [2024-05-15 12:27:57.528611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58507 ] 00:06:48.896 [2024-05-15 12:27:57.692957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.153 [2024-05-15 12:27:58.007054] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.153 [2024-05-15 12:27:58.007355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.551 12:27:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.551 12:27:59 -- common/autotest_common.sh@852 -- # return 0 00:06:50.551 12:27:59 -- event/cpu_locks.sh@49 -- # locks_exist 58507 00:06:50.551 12:27:59 -- event/cpu_locks.sh@22 -- # lslocks -p 58507 00:06:50.551 12:27:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.809 12:27:59 -- event/cpu_locks.sh@50 -- # killprocess 58507 00:06:50.809 12:27:59 -- common/autotest_common.sh@926 -- # '[' -z 58507 ']' 00:06:50.809 12:27:59 -- common/autotest_common.sh@930 -- # kill -0 58507 00:06:50.809 12:27:59 -- common/autotest_common.sh@931 -- # uname 00:06:50.809 12:27:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:50.809 12:27:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58507 00:06:50.809 killing process with pid 58507 00:06:50.809 12:27:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:50.809 12:27:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:50.809 12:27:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58507' 00:06:50.809 12:27:59 -- common/autotest_common.sh@945 -- # kill 58507 00:06:50.810 12:27:59 -- common/autotest_common.sh@950 -- # wait 58507 00:06:53.363 12:28:01 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58507 00:06:53.363 12:28:01 -- common/autotest_common.sh@640 -- # local es=0 00:06:53.363 12:28:01 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58507 00:06:53.363 12:28:01 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:53.363 12:28:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.363 12:28:01 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:53.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.363 ERROR: process (pid: 58507) is no longer running 00:06:53.363 12:28:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.363 12:28:01 -- common/autotest_common.sh@643 -- # waitforlisten 58507 00:06:53.363 12:28:01 -- common/autotest_common.sh@819 -- # '[' -z 58507 ']' 00:06:53.363 12:28:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.363 12:28:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.363 12:28:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
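The locks_exist check traced above is just util-linux lslocks filtered for the per-core lock files; a standalone version, condensed from the helper in test/event/cpu_locks.sh:

  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # true iff pid holds a /var/tmp/spdk_cpu_lock_* lock
  }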
00:06:53.363 12:28:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.363 12:28:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.363 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58507) - No such process 00:06:53.363 12:28:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.363 12:28:01 -- common/autotest_common.sh@852 -- # return 1 00:06:53.363 12:28:01 -- common/autotest_common.sh@643 -- # es=1 00:06:53.363 12:28:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:53.363 12:28:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:53.363 12:28:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:53.363 12:28:01 -- event/cpu_locks.sh@54 -- # no_locks 00:06:53.363 12:28:01 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.363 12:28:01 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.363 12:28:01 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.363 00:06:53.363 real 0m4.600s 00:06:53.363 user 0m4.728s 00:06:53.363 sys 0m0.821s 00:06:53.363 12:28:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.363 12:28:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.363 ************************************ 00:06:53.363 END TEST default_locks 00:06:53.363 ************************************ 00:06:53.363 12:28:02 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:53.363 12:28:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.363 12:28:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.363 12:28:02 -- common/autotest_common.sh@10 -- # set +x 00:06:53.363 ************************************ 00:06:53.363 START TEST default_locks_via_rpc 00:06:53.363 ************************************ 00:06:53.364 12:28:02 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:53.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.364 12:28:02 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58584 00:06:53.364 12:28:02 -- event/cpu_locks.sh@63 -- # waitforlisten 58584 00:06:53.364 12:28:02 -- common/autotest_common.sh@819 -- # '[' -z 58584 ']' 00:06:53.364 12:28:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.364 12:28:02 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.364 12:28:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.364 12:28:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.364 12:28:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.364 12:28:02 -- common/autotest_common.sh@10 -- # set +x 00:06:53.364 [2024-05-15 12:28:02.162772] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
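The NOT wrapper exercised above (waitforlisten on the already-killed pid 58507 must fail) reduces to saving the wrapped command's exit status and inverting it. A minimal sketch; the real helper in autotest_common.sh also validates the command and special-cases statuses above 128, as the (( es > 128 )) line in the trace shows:

  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # NOT succeeds only when the wrapped command failed
  }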
00:06:53.364 [2024-05-15 12:28:02.162940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:06:53.364 [2024-05-15 12:28:02.336595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.622 [2024-05-15 12:28:02.592611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.622 [2024-05-15 12:28:02.592874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.996 12:28:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.996 12:28:03 -- common/autotest_common.sh@852 -- # return 0 00:06:54.996 12:28:03 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:54.996 12:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.996 12:28:03 -- common/autotest_common.sh@10 -- # set +x 00:06:54.996 12:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.996 12:28:03 -- event/cpu_locks.sh@67 -- # no_locks 00:06:54.996 12:28:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.996 12:28:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.996 12:28:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.996 12:28:03 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.996 12:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.996 12:28:03 -- common/autotest_common.sh@10 -- # set +x 00:06:54.996 12:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.996 12:28:03 -- event/cpu_locks.sh@71 -- # locks_exist 58584 00:06:54.996 12:28:03 -- event/cpu_locks.sh@22 -- # lslocks -p 58584 00:06:54.996 12:28:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.562 12:28:04 -- event/cpu_locks.sh@73 -- # killprocess 58584 00:06:55.562 12:28:04 -- common/autotest_common.sh@926 -- # '[' -z 58584 ']' 00:06:55.562 12:28:04 -- common/autotest_common.sh@930 -- # kill -0 58584 00:06:55.562 12:28:04 -- common/autotest_common.sh@931 -- # uname 00:06:55.562 12:28:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:55.562 12:28:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58584 00:06:55.562 killing process with pid 58584 00:06:55.562 12:28:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:55.562 12:28:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:55.562 12:28:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58584' 00:06:55.562 12:28:04 -- common/autotest_common.sh@945 -- # kill 58584 00:06:55.562 12:28:04 -- common/autotest_common.sh@950 -- # wait 58584 00:06:58.130 00:06:58.130 real 0m4.505s 00:06:58.130 user 0m4.689s 00:06:58.130 sys 0m0.796s 00:06:58.130 12:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.130 12:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.130 ************************************ 00:06:58.130 END TEST default_locks_via_rpc 00:06:58.131 ************************************ 00:06:58.131 12:28:06 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:58.131 12:28:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.131 12:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.131 12:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.131 
************************************ 00:06:58.131 START TEST non_locking_app_on_locked_coremask 00:06:58.131 ************************************ 00:06:58.131 12:28:06 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:58.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.131 12:28:06 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58668 00:06:58.131 12:28:06 -- event/cpu_locks.sh@81 -- # waitforlisten 58668 /var/tmp/spdk.sock 00:06:58.131 12:28:06 -- common/autotest_common.sh@819 -- # '[' -z 58668 ']' 00:06:58.131 12:28:06 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.131 12:28:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.131 12:28:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.131 12:28:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.131 12:28:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.131 12:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.131 [2024-05-15 12:28:06.702171] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:58.131 [2024-05-15 12:28:06.702319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58668 ] 00:06:58.131 [2024-05-15 12:28:06.866110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.131 [2024-05-15 12:28:07.106526] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.131 [2024-05-15 12:28:07.106761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.505 12:28:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.505 12:28:08 -- common/autotest_common.sh@852 -- # return 0 00:06:59.505 12:28:08 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:59.505 12:28:08 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58691 00:06:59.505 12:28:08 -- event/cpu_locks.sh@85 -- # waitforlisten 58691 /var/tmp/spdk2.sock 00:06:59.505 12:28:08 -- common/autotest_common.sh@819 -- # '[' -z 58691 ']' 00:06:59.505 12:28:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.505 12:28:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:59.505 12:28:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.505 12:28:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:59.505 12:28:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.505 [2024-05-15 12:28:08.467549] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:59.505 [2024-05-15 12:28:08.468075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58691 ] 00:06:59.764 [2024-05-15 12:28:08.644424] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
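Condensed, the scenario this test drives: the first target claims the core 0 lock, and a second target still comes up on core 0 because it opts out of core locking and talks on its own RPC socket. All flags as they appear in the trace above:

  build/bin/spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock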
00:06:59.764 [2024-05-15 12:28:08.648512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.330 [2024-05-15 12:28:09.136806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.330 [2024-05-15 12:28:09.137076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.245 12:28:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:02.245 12:28:10 -- common/autotest_common.sh@852 -- # return 0 00:07:02.245 12:28:10 -- event/cpu_locks.sh@87 -- # locks_exist 58668 00:07:02.245 12:28:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.245 12:28:10 -- event/cpu_locks.sh@22 -- # lslocks -p 58668 00:07:02.811 12:28:11 -- event/cpu_locks.sh@89 -- # killprocess 58668 00:07:02.811 12:28:11 -- common/autotest_common.sh@926 -- # '[' -z 58668 ']' 00:07:02.811 12:28:11 -- common/autotest_common.sh@930 -- # kill -0 58668 00:07:02.811 12:28:11 -- common/autotest_common.sh@931 -- # uname 00:07:02.811 12:28:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:02.811 12:28:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58668 00:07:02.811 killing process with pid 58668 00:07:02.811 12:28:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:02.811 12:28:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:02.811 12:28:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58668' 00:07:02.811 12:28:11 -- common/autotest_common.sh@945 -- # kill 58668 00:07:02.811 12:28:11 -- common/autotest_common.sh@950 -- # wait 58668 00:07:08.073 12:28:16 -- event/cpu_locks.sh@90 -- # killprocess 58691 00:07:08.073 12:28:16 -- common/autotest_common.sh@926 -- # '[' -z 58691 ']' 00:07:08.073 12:28:16 -- common/autotest_common.sh@930 -- # kill -0 58691 00:07:08.073 12:28:16 -- common/autotest_common.sh@931 -- # uname 00:07:08.073 12:28:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:08.073 12:28:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58691 00:07:08.073 killing process with pid 58691 00:07:08.073 12:28:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:08.073 12:28:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:08.073 12:28:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58691' 00:07:08.073 12:28:16 -- common/autotest_common.sh@945 -- # kill 58691 00:07:08.073 12:28:16 -- common/autotest_common.sh@950 -- # wait 58691 00:07:09.459 ************************************ 00:07:09.459 END TEST non_locking_app_on_locked_coremask 00:07:09.459 ************************************ 00:07:09.459 00:07:09.459 real 0m11.839s 00:07:09.459 user 0m12.682s 00:07:09.459 sys 0m1.485s 00:07:09.459 12:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.459 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:07:09.718 12:28:18 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:09.718 12:28:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.718 12:28:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.718 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:07:09.718 ************************************ 00:07:09.718 START TEST locking_app_on_unlocked_coremask 00:07:09.718 ************************************ 00:07:09.718 12:28:18 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:07:09.718 12:28:18 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=58841 00:07:09.718 12:28:18 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:09.718 12:28:18 -- event/cpu_locks.sh@99 -- # waitforlisten 58841 /var/tmp/spdk.sock 00:07:09.718 12:28:18 -- common/autotest_common.sh@819 -- # '[' -z 58841 ']' 00:07:09.718 12:28:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.718 12:28:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:09.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.718 12:28:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.718 12:28:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:09.718 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:07:09.718 [2024-05-15 12:28:18.592730] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:09.718 [2024-05-15 12:28:18.592889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58841 ] 00:07:09.977 [2024-05-15 12:28:18.754600] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.977 [2024-05-15 12:28:18.754663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.235 [2024-05-15 12:28:19.002200] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.235 [2024-05-15 12:28:19.002435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.611 12:28:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:11.611 12:28:20 -- common/autotest_common.sh@852 -- # return 0 00:07:11.611 12:28:20 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58865 00:07:11.611 12:28:20 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.611 12:28:20 -- event/cpu_locks.sh@103 -- # waitforlisten 58865 /var/tmp/spdk2.sock 00:07:11.611 12:28:20 -- common/autotest_common.sh@819 -- # '[' -z 58865 ']' 00:07:11.611 12:28:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.611 12:28:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.611 12:28:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.611 12:28:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.611 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:07:11.611 [2024-05-15 12:28:20.386566] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
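The locks these tests probe are advisory file locks on /var/tmp/spdk_cpu_lock_NNN, one file per claimed core (SPDK takes them in app.c, and lslocks lists them elsewhere in this log). For illustration only, the same claim can be made from a shell with util-linux flock(1); the fd number here is arbitrary:

  exec 9>/var/tmp/spdk_cpu_lock_000                 # open (create) the core-0 lock file on fd 9
  flock -n 9 || echo 'core 0 already claimed' >&2   # non-blocking exclusive lock attempt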
00:07:11.611 [2024-05-15 12:28:20.386725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58865 ] 00:07:11.611 [2024-05-15 12:28:20.570985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.177 [2024-05-15 12:28:21.060133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.177 [2024-05-15 12:28:21.060363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.098 12:28:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.098 12:28:22 -- common/autotest_common.sh@852 -- # return 0 00:07:14.098 12:28:22 -- event/cpu_locks.sh@105 -- # locks_exist 58865 00:07:14.098 12:28:22 -- event/cpu_locks.sh@22 -- # lslocks -p 58865 00:07:14.098 12:28:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.671 12:28:23 -- event/cpu_locks.sh@107 -- # killprocess 58841 00:07:14.671 12:28:23 -- common/autotest_common.sh@926 -- # '[' -z 58841 ']' 00:07:14.671 12:28:23 -- common/autotest_common.sh@930 -- # kill -0 58841 00:07:14.671 12:28:23 -- common/autotest_common.sh@931 -- # uname 00:07:14.671 12:28:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:14.671 12:28:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58841 00:07:14.671 12:28:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:14.671 12:28:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:14.671 killing process with pid 58841 00:07:14.671 12:28:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58841' 00:07:14.671 12:28:23 -- common/autotest_common.sh@945 -- # kill 58841 00:07:14.671 12:28:23 -- common/autotest_common.sh@950 -- # wait 58841 00:07:19.942 12:28:27 -- event/cpu_locks.sh@108 -- # killprocess 58865 00:07:19.942 12:28:27 -- common/autotest_common.sh@926 -- # '[' -z 58865 ']' 00:07:19.942 12:28:27 -- common/autotest_common.sh@930 -- # kill -0 58865 00:07:19.942 12:28:27 -- common/autotest_common.sh@931 -- # uname 00:07:19.942 12:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:19.942 12:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58865 00:07:19.942 12:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:19.942 killing process with pid 58865 00:07:19.942 12:28:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:19.942 12:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58865' 00:07:19.942 12:28:27 -- common/autotest_common.sh@945 -- # kill 58865 00:07:19.942 12:28:27 -- common/autotest_common.sh@950 -- # wait 58865 00:07:21.316 00:07:21.316 real 0m11.638s 00:07:21.316 user 0m12.415s 00:07:21.316 sys 0m1.491s 00:07:21.316 12:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.316 12:28:30 -- common/autotest_common.sh@10 -- # set +x 00:07:21.316 ************************************ 00:07:21.316 END TEST locking_app_on_unlocked_coremask 00:07:21.316 ************************************ 00:07:21.316 12:28:30 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:21.316 12:28:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:21.316 12:28:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.316 12:28:30 -- common/autotest_common.sh@10 -- # set 
+x 00:07:21.316 ************************************ 00:07:21.316 START TEST locking_app_on_locked_coremask 00:07:21.316 ************************************ 00:07:21.316 12:28:30 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:21.316 12:28:30 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59015 00:07:21.316 12:28:30 -- event/cpu_locks.sh@116 -- # waitforlisten 59015 /var/tmp/spdk.sock 00:07:21.316 12:28:30 -- common/autotest_common.sh@819 -- # '[' -z 59015 ']' 00:07:21.316 12:28:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.316 12:28:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:21.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.316 12:28:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.316 12:28:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:21.316 12:28:30 -- common/autotest_common.sh@10 -- # set +x 00:07:21.316 12:28:30 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.316 [2024-05-15 12:28:30.300144] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:21.316 [2024-05-15 12:28:30.300435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59015 ] 00:07:21.575 [2024-05-15 12:28:30.477736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.833 [2024-05-15 12:28:30.723644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:21.833 [2024-05-15 12:28:30.723876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.209 12:28:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.209 12:28:31 -- common/autotest_common.sh@852 -- # return 0 00:07:23.209 12:28:31 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59039 00:07:23.209 12:28:31 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:23.209 12:28:31 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59039 /var/tmp/spdk2.sock 00:07:23.209 12:28:31 -- common/autotest_common.sh@640 -- # local es=0 00:07:23.209 12:28:31 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 59039 /var/tmp/spdk2.sock 00:07:23.209 12:28:31 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:23.209 12:28:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:23.209 12:28:31 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:23.209 12:28:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:23.209 12:28:31 -- common/autotest_common.sh@643 -- # waitforlisten 59039 /var/tmp/spdk2.sock 00:07:23.209 12:28:31 -- common/autotest_common.sh@819 -- # '[' -z 59039 ']' 00:07:23.209 12:28:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.209 12:28:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.209 12:28:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
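Every test in this file opens with the launch idiom visible above: start the target in the background, record $! as spdk_tgt_pid, then wait for its RPC socket. A rough sketch of that waitforlisten loop, with the simplifying assumption that a present socket means the app is ready (the real helper also confirms the app answers RPCs; max_retries=100 is taken from the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  sock=/var/tmp/spdk.sock max_retries=100
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$spdk_tgt_pid" || exit 1   # target died during startup
    [ -S "$sock" ] && break             # socket is up, stop polling
    sleep 0.1                           # poll interval is illustrative
  done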
00:07:23.209 12:28:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.209 12:28:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.209 [2024-05-15 12:28:32.033130] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:23.209 [2024-05-15 12:28:32.033270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59039 ] 00:07:23.209 [2024-05-15 12:28:32.209688] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59015 has claimed it. 00:07:23.209 [2024-05-15 12:28:32.209787] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:23.774 ERROR: process (pid: 59039) is no longer running 00:07:23.774 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (59039) - No such process 00:07:23.774 12:28:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.774 12:28:32 -- common/autotest_common.sh@852 -- # return 1 00:07:23.774 12:28:32 -- common/autotest_common.sh@643 -- # es=1 00:07:23.774 12:28:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:23.774 12:28:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:23.774 12:28:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:23.774 12:28:32 -- event/cpu_locks.sh@122 -- # locks_exist 59015 00:07:23.774 12:28:32 -- event/cpu_locks.sh@22 -- # lslocks -p 59015 00:07:23.774 12:28:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.338 12:28:33 -- event/cpu_locks.sh@124 -- # killprocess 59015 00:07:24.338 12:28:33 -- common/autotest_common.sh@926 -- # '[' -z 59015 ']' 00:07:24.338 12:28:33 -- common/autotest_common.sh@930 -- # kill -0 59015 00:07:24.338 12:28:33 -- common/autotest_common.sh@931 -- # uname 00:07:24.338 12:28:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:24.338 12:28:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59015 00:07:24.338 12:28:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:24.338 12:28:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:24.338 killing process with pid 59015 00:07:24.338 12:28:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59015' 00:07:24.338 12:28:33 -- common/autotest_common.sh@945 -- # kill 59015 00:07:24.338 12:28:33 -- common/autotest_common.sh@950 -- # wait 59015 00:07:26.870 00:07:26.870 real 0m5.184s 00:07:26.870 user 0m5.559s 00:07:26.870 sys 0m0.889s 00:07:26.870 12:28:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.870 12:28:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.870 ************************************ 00:07:26.870 END TEST locking_app_on_locked_coremask 00:07:26.870 ************************************ 00:07:26.870 12:28:35 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:26.870 12:28:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:26.870 12:28:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.870 12:28:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.870 ************************************ 00:07:26.870 START TEST locking_overlapped_coremask 00:07:26.870 ************************************ 00:07:26.870 12:28:35 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:26.870 12:28:35 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59108 00:07:26.870 12:28:35 -- event/cpu_locks.sh@133 -- # waitforlisten 59108 /var/tmp/spdk.sock 00:07:26.870 12:28:35 -- common/autotest_common.sh@819 -- # '[' -z 59108 ']' 00:07:26.870 12:28:35 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:26.870 12:28:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.870 12:28:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:26.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.870 12:28:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.870 12:28:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:26.870 12:28:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.870 [2024-05-15 12:28:35.533380] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:26.870 [2024-05-15 12:28:35.533596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59108 ] 00:07:26.870 [2024-05-15 12:28:35.709193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.137 [2024-05-15 12:28:35.964423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:27.137 [2024-05-15 12:28:35.964810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.137 [2024-05-15 12:28:35.964919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.137 [2024-05-15 12:28:35.964947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.525 12:28:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:28.525 12:28:37 -- common/autotest_common.sh@852 -- # return 0 00:07:28.525 12:28:37 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59134 00:07:28.525 12:28:37 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:28.525 12:28:37 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59134 /var/tmp/spdk2.sock 00:07:28.525 12:28:37 -- common/autotest_common.sh@640 -- # local es=0 00:07:28.525 12:28:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 59134 /var/tmp/spdk2.sock 00:07:28.525 12:28:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:28.525 12:28:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:28.525 12:28:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:28.525 12:28:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:28.525 12:28:37 -- common/autotest_common.sh@643 -- # waitforlisten 59134 /var/tmp/spdk2.sock 00:07:28.525 12:28:37 -- common/autotest_common.sh@819 -- # '[' -z 59134 ']' 00:07:28.525 12:28:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.525 12:28:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:28.525 12:28:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
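The NOT waitforlisten launched above is expected to fail: -m 0x7 already pinned reactors on cores 0-2, and the new mask 0x1c asks for cores 2-4, so the two masks intersect on core 2. The clash is plain bitwise arithmetic:

  printf 'shared cores: %#x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2, matching the claim error below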
00:07:28.525 12:28:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:28.525 12:28:37 -- common/autotest_common.sh@10 -- # set +x 00:07:28.525 [2024-05-15 12:28:37.367088] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:28.525 [2024-05-15 12:28:37.367285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59134 ] 00:07:28.783 [2024-05-15 12:28:37.556627] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59108 has claimed it. 00:07:28.783 [2024-05-15 12:28:37.556753] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:29.041 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (59134) - No such process 00:07:29.041 ERROR: process (pid: 59134) is no longer running 00:07:29.041 12:28:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:29.041 12:28:37 -- common/autotest_common.sh@852 -- # return 1 00:07:29.041 12:28:37 -- common/autotest_common.sh@643 -- # es=1 00:07:29.041 12:28:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:29.041 12:28:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:29.041 12:28:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:29.041 12:28:37 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:29.041 12:28:37 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.041 12:28:37 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.041 12:28:37 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.041 12:28:37 -- event/cpu_locks.sh@141 -- # killprocess 59108 00:07:29.041 12:28:37 -- common/autotest_common.sh@926 -- # '[' -z 59108 ']' 00:07:29.041 12:28:37 -- common/autotest_common.sh@930 -- # kill -0 59108 00:07:29.041 12:28:37 -- common/autotest_common.sh@931 -- # uname 00:07:29.041 12:28:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:29.041 12:28:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59108 00:07:29.041 12:28:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:29.041 12:28:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:29.041 12:28:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59108' 00:07:29.041 killing process with pid 59108 00:07:29.041 12:28:38 -- common/autotest_common.sh@945 -- # kill 59108 00:07:29.041 12:28:38 -- common/autotest_common.sh@950 -- # wait 59108 00:07:31.575 00:07:31.575 real 0m4.859s 00:07:31.575 user 0m12.975s 00:07:31.575 sys 0m0.720s 00:07:31.575 12:28:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.575 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:31.575 ************************************ 00:07:31.575 END TEST locking_overlapped_coremask 00:07:31.575 ************************************ 00:07:31.575 12:28:40 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:31.575 12:28:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:31.575 12:28:40 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.575 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:31.575 ************************************ 00:07:31.575 START TEST locking_overlapped_coremask_via_rpc 00:07:31.575 ************************************ 00:07:31.575 12:28:40 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:31.575 12:28:40 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59198 00:07:31.575 12:28:40 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:31.575 12:28:40 -- event/cpu_locks.sh@149 -- # waitforlisten 59198 /var/tmp/spdk.sock 00:07:31.575 12:28:40 -- common/autotest_common.sh@819 -- # '[' -z 59198 ']' 00:07:31.575 12:28:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.575 12:28:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:31.575 12:28:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.575 12:28:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:31.575 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:07:31.575 [2024-05-15 12:28:40.449391] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:31.575 [2024-05-15 12:28:40.449797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59198 ] 00:07:31.834 [2024-05-15 12:28:40.616477] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:31.834 [2024-05-15 12:28:40.616591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.092 [2024-05-15 12:28:40.891014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.092 [2024-05-15 12:28:40.891636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.092 [2024-05-15 12:28:40.892962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.093 [2024-05-15 12:28:40.892962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.467 12:28:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.467 12:28:42 -- common/autotest_common.sh@852 -- # return 0 00:07:33.467 12:28:42 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59229 00:07:33.467 12:28:42 -- event/cpu_locks.sh@153 -- # waitforlisten 59229 /var/tmp/spdk2.sock 00:07:33.467 12:28:42 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:33.467 12:28:42 -- common/autotest_common.sh@819 -- # '[' -z 59229 ']' 00:07:33.467 12:28:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.467 12:28:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:33.467 12:28:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
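check_remaining_locks, traced in the previous test's teardown and exercised again below, globs the surviving lock files and demands they match exactly the cores a -m 0x7 master should hold. Condensed from test/event/cpu_locks.sh:

  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # any extra or missing lock file fails the check
  }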
00:07:33.467 12:28:42 -- common/autotest_common.sh@828 -- # xtrace_disable
00:07:33.467 12:28:42 -- common/autotest_common.sh@10 -- # set +x
00:07:33.467 [2024-05-15 12:28:42.159798] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:07:33.467 [2024-05-15 12:28:42.160708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59229 ]
00:07:33.467 [2024-05-15 12:28:42.341327] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:33.467 [2024-05-15 12:28:42.341411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:34.034 [2024-05-15 12:28:42.843928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:34.034 [2024-05-15 12:28:42.845541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:34.034 [2024-05-15 12:28:42.847628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:34.034 [2024-05-15 12:28:42.847667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:07:35.972 12:28:44 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:07:35.972 12:28:44 -- common/autotest_common.sh@852 -- # return 0
00:07:35.972 12:28:44 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:35.972 12:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:35.972 12:28:44 -- common/autotest_common.sh@10 -- # set +x
00:07:35.972 12:28:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:35.972 12:28:44 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:35.972 12:28:44 -- common/autotest_common.sh@640 -- # local es=0
00:07:35.972 12:28:44 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:35.972 12:28:44 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd
00:07:35.972 12:28:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:07:35.972 12:28:44 -- common/autotest_common.sh@632 -- # type -t rpc_cmd
00:07:35.972 12:28:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:07:35.972 12:28:44 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:35.972 12:28:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:35.972 12:28:44 -- common/autotest_common.sh@10 -- # set +x
00:07:35.972 [2024-05-15 12:28:44.826829] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59198 has claimed it.
00:07:35.972 request:
00:07:35.972 {
00:07:35.972 "method": "framework_enable_cpumask_locks",
00:07:35.972 "req_id": 1
00:07:35.972 }
00:07:35.972 Got JSON-RPC error response
00:07:35.972 response:
00:07:35.972 {
00:07:35.972 "code": -32603,
00:07:35.972 "message": "Failed to claim CPU core: 2"
00:07:35.972 }
00:07:35.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
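The failing call above, issued directly against the second instance's socket; it draws the same -32603 JSON-RPC error because pid 59198 still holds the core 2 lock:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks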
00:07:35.972 12:28:44 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:07:35.972 12:28:44 -- common/autotest_common.sh@643 -- # es=1 00:07:35.972 12:28:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:35.972 12:28:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:35.972 12:28:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:35.972 12:28:44 -- event/cpu_locks.sh@158 -- # waitforlisten 59198 /var/tmp/spdk.sock 00:07:35.972 12:28:44 -- common/autotest_common.sh@819 -- # '[' -z 59198 ']' 00:07:35.972 12:28:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.972 12:28:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:35.972 12:28:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.972 12:28:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:35.972 12:28:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.231 12:28:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.231 12:28:45 -- common/autotest_common.sh@852 -- # return 0 00:07:36.231 12:28:45 -- event/cpu_locks.sh@159 -- # waitforlisten 59229 /var/tmp/spdk2.sock 00:07:36.231 12:28:45 -- common/autotest_common.sh@819 -- # '[' -z 59229 ']' 00:07:36.231 12:28:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.231 12:28:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.231 12:28:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.231 12:28:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.231 12:28:45 -- common/autotest_common.sh@10 -- # set +x 00:07:36.490 12:28:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.490 12:28:45 -- common/autotest_common.sh@852 -- # return 0 00:07:36.490 12:28:45 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:36.490 12:28:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:36.490 12:28:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:36.490 12:28:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:36.490 00:07:36.490 real 0m5.084s 00:07:36.490 user 0m2.151s 00:07:36.490 sys 0m0.323s 00:07:36.490 12:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.490 12:28:45 -- common/autotest_common.sh@10 -- # set +x 00:07:36.490 ************************************ 00:07:36.490 END TEST locking_overlapped_coremask_via_rpc 00:07:36.490 ************************************ 00:07:36.490 12:28:45 -- event/cpu_locks.sh@174 -- # cleanup 00:07:36.490 12:28:45 -- event/cpu_locks.sh@15 -- # [[ -z 59198 ]] 00:07:36.490 12:28:45 -- event/cpu_locks.sh@15 -- # killprocess 59198 00:07:36.490 12:28:45 -- common/autotest_common.sh@926 -- # '[' -z 59198 ']' 00:07:36.490 12:28:45 -- common/autotest_common.sh@930 -- # kill -0 59198 00:07:36.490 12:28:45 -- common/autotest_common.sh@931 -- # uname 00:07:36.490 12:28:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:36.490 12:28:45 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 59198 00:07:36.490 killing process with pid 59198 00:07:36.490 12:28:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.490 12:28:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.490 12:28:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59198' 00:07:36.490 12:28:45 -- common/autotest_common.sh@945 -- # kill 59198 00:07:36.490 12:28:45 -- common/autotest_common.sh@950 -- # wait 59198 00:07:39.023 12:28:47 -- event/cpu_locks.sh@16 -- # [[ -z 59229 ]] 00:07:39.023 12:28:47 -- event/cpu_locks.sh@16 -- # killprocess 59229 00:07:39.023 12:28:47 -- common/autotest_common.sh@926 -- # '[' -z 59229 ']' 00:07:39.023 12:28:47 -- common/autotest_common.sh@930 -- # kill -0 59229 00:07:39.023 12:28:47 -- common/autotest_common.sh@931 -- # uname 00:07:39.023 12:28:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.023 12:28:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59229 00:07:39.023 killing process with pid 59229 00:07:39.023 12:28:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:39.023 12:28:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:39.023 12:28:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59229' 00:07:39.023 12:28:47 -- common/autotest_common.sh@945 -- # kill 59229 00:07:39.023 12:28:47 -- common/autotest_common.sh@950 -- # wait 59229 00:07:41.556 12:28:49 -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.556 Process with pid 59198 is not found 00:07:41.556 Process with pid 59229 is not found 00:07:41.556 12:28:49 -- event/cpu_locks.sh@1 -- # cleanup 00:07:41.556 12:28:49 -- event/cpu_locks.sh@15 -- # [[ -z 59198 ]] 00:07:41.556 12:28:49 -- event/cpu_locks.sh@15 -- # killprocess 59198 00:07:41.556 12:28:49 -- common/autotest_common.sh@926 -- # '[' -z 59198 ']' 00:07:41.556 12:28:49 -- common/autotest_common.sh@930 -- # kill -0 59198 00:07:41.556 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (59198) - No such process 00:07:41.556 12:28:49 -- common/autotest_common.sh@953 -- # echo 'Process with pid 59198 is not found' 00:07:41.556 12:28:49 -- event/cpu_locks.sh@16 -- # [[ -z 59229 ]] 00:07:41.556 12:28:49 -- event/cpu_locks.sh@16 -- # killprocess 59229 00:07:41.556 12:28:49 -- common/autotest_common.sh@926 -- # '[' -z 59229 ']' 00:07:41.556 12:28:49 -- common/autotest_common.sh@930 -- # kill -0 59229 00:07:41.556 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (59229) - No such process 00:07:41.556 12:28:49 -- common/autotest_common.sh@953 -- # echo 'Process with pid 59229 is not found' 00:07:41.556 12:28:49 -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.556 ************************************ 00:07:41.556 END TEST cpu_locks 00:07:41.556 ************************************ 00:07:41.556 00:07:41.556 real 0m52.628s 00:07:41.556 user 1m30.440s 00:07:41.556 sys 0m7.731s 00:07:41.556 12:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.556 12:28:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.557 ************************************ 00:07:41.557 END TEST event 00:07:41.557 ************************************ 00:07:41.557 00:07:41.557 real 1m24.282s 00:07:41.557 user 2m30.021s 00:07:41.557 sys 0m11.829s 00:07:41.557 12:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.557 12:28:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.557 12:28:50 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:41.557 12:28:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.557 12:28:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.557 12:28:50 -- common/autotest_common.sh@10 -- # set +x 00:07:41.557 ************************************ 00:07:41.557 START TEST thread 00:07:41.557 ************************************ 00:07:41.557 12:28:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:41.557 * Looking for test storage... 00:07:41.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:41.557 12:28:50 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.557 12:28:50 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:41.557 12:28:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.557 12:28:50 -- common/autotest_common.sh@10 -- # set +x 00:07:41.557 ************************************ 00:07:41.557 START TEST thread_poller_perf 00:07:41.557 ************************************ 00:07:41.557 12:28:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.557 [2024-05-15 12:28:50.156932] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:41.557 [2024-05-15 12:28:50.157075] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59414 ] 00:07:41.557 [2024-05-15 12:28:50.322239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.819 [2024-05-15 12:28:50.608160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.819 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:43.195 ====================================== 00:07:43.195 busy:2211553256 (cyc) 00:07:43.195 total_run_count: 287000 00:07:43.195 tsc_hz: 2200000000 (cyc) 00:07:43.195 ====================================== 00:07:43.195 poller_cost: 7705 (cyc), 3502 (nsec) 00:07:43.195 00:07:43.195 real 0m1.879s 00:07:43.195 user 0m1.622s 00:07:43.195 sys 0m0.143s 00:07:43.195 12:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.195 ************************************ 00:07:43.195 END TEST thread_poller_perf 00:07:43.195 ************************************ 00:07:43.195 12:28:51 -- common/autotest_common.sh@10 -- # set +x 00:07:43.195 12:28:52 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.195 12:28:52 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:43.195 12:28:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.195 12:28:52 -- common/autotest_common.sh@10 -- # set +x 00:07:43.195 ************************************ 00:07:43.195 START TEST thread_poller_perf 00:07:43.195 ************************************ 00:07:43.195 12:28:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.195 [2024-05-15 12:28:52.083179] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
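The poller_cost figures in the summary above follow directly from the printed counters: cost per poll in cycles is busy cycles divided by total_run_count, and the nanosecond figure rescales that through tsc_hz. A quick sanity check of the first run's numbers (a sketch of the arithmetic only; the tool's own rounding may differ slightly):

    # Recompute poller_cost from the counters reported above.
    awk 'BEGIN {
        busy = 2211553256; runs = 287000; tsc_hz = 2200000000
        cyc = busy / runs                 # ~7705.7 cycles per poll
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
    }'
    # -> poller_cost: 7705 (cyc), 3502 (nsec), matching the report.

The same arithmetic applied to the zero-period run whose output continues below gives 2205004142 / 3794000, roughly 581 cycles or 264 ns per poll; dropping the 1 microsecond poller period removes most of the per-poll cost.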
00:07:43.195 [2024-05-15 12:28:52.083346] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59452 ] 00:07:43.452 [2024-05-15 12:28:52.243857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.710 [2024-05-15 12:28:52.487385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.710 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:45.081 ====================================== 00:07:45.081 busy:2205004142 (cyc) 00:07:45.081 total_run_count: 3794000 00:07:45.081 tsc_hz: 2200000000 (cyc) 00:07:45.081 ====================================== 00:07:45.081 poller_cost: 581 (cyc), 264 (nsec) 00:07:45.081 00:07:45.081 real 0m1.822s 00:07:45.081 user 0m1.599s 00:07:45.081 sys 0m0.110s 00:07:45.081 12:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.082 12:28:53 -- common/autotest_common.sh@10 -- # set +x 00:07:45.082 ************************************ 00:07:45.082 END TEST thread_poller_perf 00:07:45.082 ************************************ 00:07:45.082 12:28:53 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:45.082 ************************************ 00:07:45.082 END TEST thread 00:07:45.082 ************************************ 00:07:45.082 00:07:45.082 real 0m3.876s 00:07:45.082 user 0m3.282s 00:07:45.082 sys 0m0.358s 00:07:45.082 12:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.082 12:28:53 -- common/autotest_common.sh@10 -- # set +x 00:07:45.082 12:28:53 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:45.082 12:28:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.082 12:28:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.082 12:28:53 -- common/autotest_common.sh@10 -- # set +x 00:07:45.082 ************************************ 00:07:45.082 START TEST accel 00:07:45.082 ************************************ 00:07:45.082 12:28:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:45.082 * Looking for test storage... 00:07:45.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:45.082 12:28:54 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:45.082 12:28:54 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:45.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.082 12:28:54 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:45.082 12:28:54 -- accel/accel.sh@59 -- # spdk_tgt_pid=59532 00:07:45.082 12:28:54 -- accel/accel.sh@60 -- # waitforlisten 59532 00:07:45.082 12:28:54 -- common/autotest_common.sh@819 -- # '[' -z 59532 ']' 00:07:45.082 12:28:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.082 12:28:54 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:45.082 12:28:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:45.082 12:28:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
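Two harness idioms recur in the accel startup trace here. First, the 'Waiting for process to start up...' line above comes from autotest_common.sh's waitforlisten, whose traced statements show a pid argument check, a default RPC socket of /var/tmp/spdk.sock, and a retry budget of 100. A minimal sketch of the loop those lines imply; the probe command and sleep interval are illustrative assumptions, not the exact upstream code:

    waitforlisten() {
        [ -z "$1" ] && return 1                    # traced: '[' -z 59532 ']'
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}    # traced default socket path
        local max_retries=100                      # traced retry budget
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = 0; i < max_retries; i++)); do
            # Assumed probe: the real helper checks the socket via an RPC call.
            kill -0 "$pid" 2> /dev/null && [ -S "$rpc_addr" ] && return 0
            sleep 0.1
        done
        return 1
    }

Second, once the target answers, accel.sh builds its expected-opcode map (the exp_opcs/IFS== loop traced just below): it flattens the accel_get_opc_assignments RPC output into key=value pairs with jq and records one module per opcode, all software in this run. Reconstructed from the traced lines:

    exp_opcs=($(rpc_cmd accel_get_opc_assignments |
        jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"    # split e.g. "copy=software"
        expected_opcs["$opc"]=$module
    done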
00:07:45.082 12:28:54 -- accel/accel.sh@58 -- # build_accel_config 00:07:45.082 12:28:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.082 12:28:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:45.082 12:28:54 -- common/autotest_common.sh@10 -- # set +x 00:07:45.082 12:28:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.082 12:28:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.082 12:28:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.082 12:28:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.082 12:28:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.082 12:28:54 -- accel/accel.sh@42 -- # jq -r . 00:07:45.339 [2024-05-15 12:28:54.140037] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:45.339 [2024-05-15 12:28:54.140521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:07:45.339 [2024-05-15 12:28:54.307851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.596 [2024-05-15 12:28:54.601621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.596 [2024-05-15 12:28:54.601932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.968 12:28:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:46.968 12:28:55 -- common/autotest_common.sh@852 -- # return 0 00:07:46.968 12:28:55 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:46.968 12:28:55 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:46.968 12:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:46.968 12:28:55 -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 12:28:55 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:46.968 12:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # IFS== 00:07:46.968 12:28:55 -- accel/accel.sh@64 -- # read -r opc module 00:07:46.968 12:28:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:46.968 12:28:55 -- accel/accel.sh@67 -- # killprocess 59532 00:07:46.968 12:28:55 -- common/autotest_common.sh@926 -- # '[' -z 59532 ']' 00:07:46.968 12:28:55 -- common/autotest_common.sh@930 -- # kill -0 59532 00:07:46.968 12:28:55 -- common/autotest_common.sh@931 -- # uname 00:07:46.968 12:28:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:46.968 12:28:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59532 00:07:46.968 12:28:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:46.968 12:28:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:46.968 12:28:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59532' 00:07:46.968 killing process with pid 59532 00:07:46.968 12:28:55 -- common/autotest_common.sh@945 -- # kill 59532 00:07:46.968 12:28:55 -- common/autotest_common.sh@950 -- # wait 59532 00:07:49.493 12:28:58 -- accel/accel.sh@68 -- # trap - ERR 00:07:49.493 12:28:58 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:49.493 12:28:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:49.493 12:28:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.493 12:28:58 -- common/autotest_common.sh@10 -- # set +x 00:07:49.493 12:28:58 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:49.493 12:28:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:49.493 12:28:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.493 12:28:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.493 12:28:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.493 12:28:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.493 12:28:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.493 12:28:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.493 12:28:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.493 12:28:58 -- accel/accel.sh@42 -- # jq -r . 
00:07:49.493 12:28:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.493 12:28:58 -- common/autotest_common.sh@10 -- # set +x 00:07:49.493 12:28:58 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:49.493 12:28:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:49.493 12:28:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.493 12:28:58 -- common/autotest_common.sh@10 -- # set +x 00:07:49.493 ************************************ 00:07:49.493 START TEST accel_missing_filename 00:07:49.493 ************************************ 00:07:49.493 12:28:58 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:49.493 12:28:58 -- common/autotest_common.sh@640 -- # local es=0 00:07:49.493 12:28:58 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:49.493 12:28:58 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:49.493 12:28:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:49.493 12:28:58 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:49.493 12:28:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:49.493 12:28:58 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:49.493 12:28:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:49.493 12:28:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.493 12:28:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.493 12:28:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.493 12:28:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.493 12:28:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.493 12:28:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.493 12:28:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.493 12:28:58 -- accel/accel.sh@42 -- # jq -r . 00:07:49.493 [2024-05-15 12:28:58.344392] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:49.493 [2024-05-15 12:28:58.344709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59620 ] 00:07:49.751 [2024-05-15 12:28:58.522657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.008 [2024-05-15 12:28:58.766025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.009 [2024-05-15 12:28:59.004142] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.609 [2024-05-15 12:28:59.504181] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:50.866 A filename is required. 
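'A filename is required.' is the expected outcome here: accel_missing_filename drives accel_perf -t 1 -w compress with no -l input file through the harness's NOT wrapper, which passes only when the wrapped command fails. The es=234 / es=106 / es=1 sequence that follows is the wrapper normalizing the raw exit status before asserting on it. A simplified sketch of that logic, reconstructed from the traced lines rather than copied from the upstream helper:

    NOT() {
        local es=0
        "$@" || es=$?                        # run the wrapped command, capture its status
        (( es > 128 )) && es=$((es - 128))   # traced: 234 -> 106 (signal-range statuses)
        case "$es" in
            0) ;;                            # unexpected success stays 0 ...
            *) es=1 ;;                       # ... any failure collapses to 1 (traced: es=1)
        esac
        (( !es == 0 ))                       # succeed only if the command failed
    }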
00:07:51.124 12:28:59 -- common/autotest_common.sh@643 -- # es=234 00:07:51.124 12:28:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:51.124 12:28:59 -- common/autotest_common.sh@652 -- # es=106 00:07:51.124 12:28:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:51.124 12:28:59 -- common/autotest_common.sh@660 -- # es=1 00:07:51.124 12:28:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:51.124 00:07:51.124 real 0m1.613s 00:07:51.124 user 0m1.335s 00:07:51.124 sys 0m0.220s 00:07:51.124 12:28:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.124 ************************************ 00:07:51.124 END TEST accel_missing_filename 00:07:51.124 ************************************ 00:07:51.124 12:28:59 -- common/autotest_common.sh@10 -- # set +x 00:07:51.124 12:28:59 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.124 12:28:59 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:51.124 12:28:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.124 12:28:59 -- common/autotest_common.sh@10 -- # set +x 00:07:51.124 ************************************ 00:07:51.124 START TEST accel_compress_verify 00:07:51.124 ************************************ 00:07:51.124 12:28:59 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.124 12:28:59 -- common/autotest_common.sh@640 -- # local es=0 00:07:51.124 12:28:59 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.124 12:28:59 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:51.124 12:28:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:51.124 12:28:59 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:51.124 12:28:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:51.124 12:28:59 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.125 12:28:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.125 12:28:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.125 12:28:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.125 12:28:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.125 12:28:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.125 12:28:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.125 12:28:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.125 12:28:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.125 12:28:59 -- accel/accel.sh@42 -- # jq -r . 00:07:51.125 [2024-05-15 12:28:59.993410] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
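The run beginning above is accel_compress_verify, another expected failure through the same NOT wrapper: the compress workload rejects -y, so the application aborts and the wrapper converts that into a pass (the traced es=161 below has 128 subtracted to give es=33, which again collapses to 1). The invocation, exactly as the run_test line above issued it:

    # Expected to abort with "Compression does not support the verify option":
    NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y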
00:07:51.125 [2024-05-15 12:28:59.993700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59657 ] 00:07:51.382 [2024-05-15 12:29:00.178571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.640 [2024-05-15 12:29:00.421919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.640 [2024-05-15 12:29:00.627692] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.206 [2024-05-15 12:29:01.123320] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:52.770 00:07:52.770 Compression does not support the verify option, aborting. 00:07:52.770 12:29:01 -- common/autotest_common.sh@643 -- # es=161 00:07:52.770 12:29:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:52.770 12:29:01 -- common/autotest_common.sh@652 -- # es=33 00:07:52.770 12:29:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:52.770 12:29:01 -- common/autotest_common.sh@660 -- # es=1 00:07:52.770 12:29:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:52.770 00:07:52.770 real 0m1.567s 00:07:52.770 user 0m1.283s 00:07:52.770 sys 0m0.216s 00:07:52.770 12:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.770 12:29:01 -- common/autotest_common.sh@10 -- # set +x 00:07:52.770 ************************************ 00:07:52.770 END TEST accel_compress_verify 00:07:52.770 ************************************ 00:07:52.770 12:29:01 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:52.770 12:29:01 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:52.770 12:29:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.770 12:29:01 -- common/autotest_common.sh@10 -- # set +x 00:07:52.770 ************************************ 00:07:52.770 START TEST accel_wrong_workload 00:07:52.770 ************************************ 00:07:52.770 12:29:01 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:52.770 12:29:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:52.770 12:29:01 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:52.770 12:29:01 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:52.770 12:29:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:52.770 12:29:01 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:52.770 12:29:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:52.770 12:29:01 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:52.770 12:29:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:52.770 12:29:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.770 12:29:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.770 12:29:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.770 12:29:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.770 12:29:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.770 12:29:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.770 12:29:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.770 12:29:01 -- accel/accel.sh@42 -- # jq -r . 
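Every accel_perf launch in this file is preceded by the same build_accel_config trace: an empty accel_json_cfg array, a run of [[ 0 -gt 0 ]] feature gates that all fall through in this configuration, then local IFS=, and jq -r . to assemble the JSON handed to accel_perf via -c /dev/fd/62. A rough sketch of that shape; the gate variable and method names are hypothetical stand-ins, since the trace only shows their already-expanded values:

    build_accel_config() {
        accel_json_cfg=()
        # Hypothetical gates: the trace shows only "[[ 0 -gt 0 ]]" after expansion.
        [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        [[ ${SPDK_TEST_ACCEL_IAA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
        # Traced "local IFS=," + "jq -r .": join fragments, pretty-print the config.
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }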
00:07:52.770 Unsupported workload type: foobar 00:07:52.770 [2024-05-15 12:29:01.600415] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:52.770 accel_perf options: 00:07:52.770 [-h help message] 00:07:52.770 [-q queue depth per core] 00:07:52.770 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:52.770 [-T number of threads per core 00:07:52.770 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:52.770 [-t time in seconds] 00:07:52.770 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:52.770 [ dif_verify, , dif_generate, dif_generate_copy 00:07:52.770 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:52.770 [-l for compress/decompress workloads, name of uncompressed input file 00:07:52.770 [-S for crc32c workload, use this seed value (default 0) 00:07:52.770 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:52.770 [-f for fill workload, use this BYTE value (default 255) 00:07:52.770 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:52.770 [-y verify result if this switch is on] 00:07:52.770 [-a tasks to allocate per core (default: same value as -q)] 00:07:52.770 Can be used to spread operations across a wider range of memory. 00:07:52.770 12:29:01 -- common/autotest_common.sh@643 -- # es=1 00:07:52.770 12:29:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:52.770 12:29:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:52.770 12:29:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:52.770 00:07:52.770 real 0m0.081s 00:07:52.770 user 0m0.086s 00:07:52.770 sys 0m0.044s 00:07:52.770 ************************************ 00:07:52.770 END TEST accel_wrong_workload 00:07:52.770 ************************************ 00:07:52.770 12:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.770 12:29:01 -- common/autotest_common.sh@10 -- # set +x 00:07:52.770 12:29:01 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:52.770 12:29:01 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:52.770 12:29:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.770 12:29:01 -- common/autotest_common.sh@10 -- # set +x 00:07:52.770 ************************************ 00:07:52.770 START TEST accel_negative_buffers 00:07:52.770 ************************************ 00:07:52.770 12:29:01 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:52.770 12:29:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:52.770 12:29:01 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:52.770 12:29:01 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:52.770 12:29:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:52.770 12:29:01 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:52.770 12:29:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:52.770 12:29:01 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:52.770 12:29:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:52.770 12:29:01 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:52.770 12:29:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.770 12:29:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.770 12:29:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.770 12:29:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.770 12:29:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.770 12:29:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.771 12:29:01 -- accel/accel.sh@42 -- # jq -r . 00:07:52.771 -x option must be non-negative. 00:07:52.771 [2024-05-15 12:29:01.724661] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:52.771 accel_perf options: 00:07:52.771 [-h help message] 00:07:52.771 [-q queue depth per core] 00:07:52.771 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:52.771 [-T number of threads per core 00:07:52.771 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:52.771 [-t time in seconds] 00:07:52.771 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:52.771 [ dif_verify, , dif_generate, dif_generate_copy 00:07:52.771 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:52.771 [-l for compress/decompress workloads, name of uncompressed input file 00:07:52.771 [-S for crc32c workload, use this seed value (default 0) 00:07:52.771 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:52.771 [-f for fill workload, use this BYTE value (default 255) 00:07:52.771 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:52.771 [-y verify result if this switch is on] 00:07:52.771 [-a tasks to allocate per core (default: same value as -q)] 00:07:52.771 Can be used to spread operations across a wider range of memory. 
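The usage text dumped above, once per failing flag, doubles as a reference for the valid invocations this suite uses; the option meanings come straight from that help output. For example:

    # 1-second software crc32c run, seed 32, verification on: the accel_crc32c
    # case that follows (-t seconds, -w workload, -S seed, -y verify).
    accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
    # A negative source-buffer count is rejected up front, which is exactly
    # what accel_negative_buffers asserted above via the NOT wrapper:
    NOT accel_perf -t 1 -w xor -y -x -1    # "-x option must be non-negative."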
00:07:52.771 12:29:01 -- common/autotest_common.sh@643 -- # es=1 00:07:52.771 12:29:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:52.771 12:29:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:52.771 12:29:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:52.771 00:07:52.771 real 0m0.070s 00:07:52.771 user 0m0.082s 00:07:52.771 sys 0m0.039s 00:07:52.771 ************************************ 00:07:52.771 END TEST accel_negative_buffers 00:07:52.771 ************************************ 00:07:52.771 12:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.771 12:29:01 -- common/autotest_common.sh@10 -- # set +x 00:07:53.028 12:29:01 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:53.028 12:29:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:53.028 12:29:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.028 12:29:01 -- common/autotest_common.sh@10 -- # set +x 00:07:53.028 ************************************ 00:07:53.028 START TEST accel_crc32c 00:07:53.028 ************************************ 00:07:53.028 12:29:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:53.028 12:29:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.028 12:29:01 -- accel/accel.sh@17 -- # local accel_module 00:07:53.028 12:29:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:53.028 12:29:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:53.028 12:29:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.028 12:29:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.028 12:29:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.028 12:29:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.028 12:29:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.028 12:29:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.028 12:29:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.028 12:29:01 -- accel/accel.sh@42 -- # jq -r . 00:07:53.028 [2024-05-15 12:29:01.850148] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:53.028 [2024-05-15 12:29:01.850316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:07:53.028 [2024-05-15 12:29:02.020079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.286 [2024-05-15 12:29:02.255652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.814 12:29:04 -- accel/accel.sh@18 -- # out=' 00:07:55.814 SPDK Configuration: 00:07:55.814 Core mask: 0x1 00:07:55.814 00:07:55.814 Accel Perf Configuration: 00:07:55.814 Workload Type: crc32c 00:07:55.814 CRC-32C seed: 32 00:07:55.814 Transfer size: 4096 bytes 00:07:55.814 Vector count 1 00:07:55.814 Module: software 00:07:55.814 Queue depth: 32 00:07:55.814 Allocate depth: 32 00:07:55.814 # threads/core: 1 00:07:55.814 Run time: 1 seconds 00:07:55.814 Verify: Yes 00:07:55.814 00:07:55.814 Running for 1 seconds... 
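In the results table that follows, bandwidth is simply transfers per second times the 4096-byte transfer size: 395072/s x 4096 B is about 1543 MiB/s, matching the printed row.

    # Sanity-check the MiB/s column from the transfers/s column (values below).
    awk 'BEGIN { printf "%d MiB/s\n", 395072 * 4096 / (1024 * 1024) }'   # -> 1543 MiB/s

Note that in the vectored crc32c run later on (-C 2), the per-core row applies the vector count as well (311200/s x 4096 B x 2, about 2431 MiB/s) while its Total line prints 1215 MiB/s, i.e. the same rate without the vector factor; the inconsistency is in the tool's summary formatting, not in the measurement.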
00:07:55.814 00:07:55.814 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.814 ------------------------------------------------------------------------------------ 00:07:55.814 0,0 395072/s 1543 MiB/s 0 0 00:07:55.814 ==================================================================================== 00:07:55.814 Total 395072/s 1543 MiB/s 0 0' 00:07:55.814 12:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:55.814 12:29:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:55.814 12:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:55.814 12:29:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:55.814 12:29:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.814 12:29:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.814 12:29:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.814 12:29:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.814 12:29:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.814 12:29:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.814 12:29:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.814 12:29:04 -- accel/accel.sh@42 -- # jq -r . 00:07:55.814 [2024-05-15 12:29:04.394034] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:55.814 [2024-05-15 12:29:04.394195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59763 ] 00:07:55.814 [2024-05-15 12:29:04.570522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.814 [2024-05-15 12:29:04.809223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.072 12:29:05 -- accel/accel.sh@21 -- # val= 00:07:56.072 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.072 12:29:05 -- accel/accel.sh@21 -- # val= 00:07:56.072 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.072 12:29:05 -- accel/accel.sh@21 -- # val=0x1 00:07:56.072 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.072 12:29:05 -- accel/accel.sh@21 -- # val= 00:07:56.072 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.072 12:29:05 -- accel/accel.sh@21 -- # val= 00:07:56.072 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.072 12:29:05 -- accel/accel.sh@21 -- # val=crc32c 00:07:56.072 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.072 12:29:05 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.072 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.072 12:29:05 -- accel/accel.sh@21 -- # val=32 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val= 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val=software 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val=32 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val=32 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val=1 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val=Yes 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val= 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:56.073 12:29:05 -- accel/accel.sh@21 -- # val= 00:07:56.073 12:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:56.073 12:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:57.975 12:29:06 -- accel/accel.sh@21 -- # val= 00:07:57.975 12:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # IFS=: 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # read -r var val 00:07:57.975 12:29:06 -- accel/accel.sh@21 -- # val= 00:07:57.975 12:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # IFS=: 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # read -r var val 00:07:57.975 12:29:06 -- accel/accel.sh@21 -- # val= 00:07:57.975 12:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # IFS=: 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # read -r var val 00:07:57.975 12:29:06 -- accel/accel.sh@21 -- # val= 00:07:57.975 12:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # IFS=: 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # read -r var val 00:07:57.975 12:29:06 -- accel/accel.sh@21 -- # val= 00:07:57.975 12:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # IFS=: 00:07:57.975 12:29:06 -- 
accel/accel.sh@20 -- # read -r var val 00:07:57.975 12:29:06 -- accel/accel.sh@21 -- # val= 00:07:57.975 12:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # IFS=: 00:07:57.975 12:29:06 -- accel/accel.sh@20 -- # read -r var val 00:07:57.975 12:29:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:57.975 12:29:06 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:57.975 ************************************ 00:07:57.975 END TEST accel_crc32c 00:07:57.975 ************************************ 00:07:57.975 12:29:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.975 00:07:57.975 real 0m5.096s 00:07:57.975 user 0m4.462s 00:07:57.975 sys 0m0.418s 00:07:57.975 12:29:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.975 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:07:57.975 12:29:06 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:57.975 12:29:06 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:57.975 12:29:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.975 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:07:57.975 ************************************ 00:07:57.975 START TEST accel_crc32c_C2 00:07:57.975 ************************************ 00:07:57.975 12:29:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:57.975 12:29:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.975 12:29:06 -- accel/accel.sh@17 -- # local accel_module 00:07:57.975 12:29:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:57.975 12:29:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.975 12:29:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:57.975 12:29:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.975 12:29:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.975 12:29:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.975 12:29:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.975 12:29:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.975 12:29:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.975 12:29:06 -- accel/accel.sh@42 -- # jq -r . 00:07:58.234 [2024-05-15 12:29:06.988028] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:58.234 [2024-05-15 12:29:06.988167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:07:58.234 [2024-05-15 12:29:07.155351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.492 [2024-05-15 12:29:07.421699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.020 12:29:09 -- accel/accel.sh@18 -- # out=' 00:08:01.020 SPDK Configuration: 00:08:01.020 Core mask: 0x1 00:08:01.020 00:08:01.020 Accel Perf Configuration: 00:08:01.020 Workload Type: crc32c 00:08:01.020 CRC-32C seed: 0 00:08:01.020 Transfer size: 4096 bytes 00:08:01.020 Vector count 2 00:08:01.020 Module: software 00:08:01.020 Queue depth: 32 00:08:01.020 Allocate depth: 32 00:08:01.020 # threads/core: 1 00:08:01.020 Run time: 1 seconds 00:08:01.020 Verify: Yes 00:08:01.020 00:08:01.020 Running for 1 seconds... 
00:08:01.020 00:08:01.020 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.020 ------------------------------------------------------------------------------------ 00:08:01.020 0,0 311200/s 2431 MiB/s 0 0 00:08:01.020 ==================================================================================== 00:08:01.020 Total 311200/s 1215 MiB/s 0 0' 00:08:01.020 12:29:09 -- accel/accel.sh@20 -- # IFS=: 00:08:01.020 12:29:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:01.020 12:29:09 -- accel/accel.sh@20 -- # read -r var val 00:08:01.020 12:29:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:01.020 12:29:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.020 12:29:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.020 12:29:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.020 12:29:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.020 12:29:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.020 12:29:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.020 12:29:09 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.020 12:29:09 -- accel/accel.sh@42 -- # jq -r . 00:08:01.020 [2024-05-15 12:29:09.524527] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:01.020 [2024-05-15 12:29:09.524675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59847 ] 00:08:01.020 [2024-05-15 12:29:09.689957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.020 [2024-05-15 12:29:09.927642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val= 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val= 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=0x1 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val= 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val= 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=crc32c 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=0 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val= 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=software 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@23 -- # accel_module=software 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=32 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=32 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=1 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val=Yes 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val= 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:01.279 12:29:10 -- accel/accel.sh@21 -- # val= 00:08:01.279 12:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # IFS=: 00:08:01.279 12:29:10 -- accel/accel.sh@20 -- # read -r var val 00:08:03.180 12:29:11 -- accel/accel.sh@21 -- # val= 00:08:03.180 12:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # IFS=: 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # read -r var val 00:08:03.180 12:29:11 -- accel/accel.sh@21 -- # val= 00:08:03.180 12:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # IFS=: 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # read -r var val 00:08:03.180 12:29:11 -- accel/accel.sh@21 -- # val= 00:08:03.180 12:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # IFS=: 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # read -r var val 00:08:03.180 12:29:11 -- accel/accel.sh@21 -- # val= 00:08:03.180 12:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # IFS=: 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # read -r var val 00:08:03.180 12:29:11 -- accel/accel.sh@21 -- # val= 00:08:03.180 12:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # IFS=: 00:08:03.180 12:29:11 -- 
accel/accel.sh@20 -- # read -r var val 00:08:03.180 12:29:11 -- accel/accel.sh@21 -- # val= 00:08:03.180 12:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # IFS=: 00:08:03.180 12:29:11 -- accel/accel.sh@20 -- # read -r var val 00:08:03.180 12:29:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:03.180 12:29:12 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:03.180 12:29:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.180 00:08:03.180 real 0m5.066s 00:08:03.180 user 0m4.477s 00:08:03.180 sys 0m0.379s 00:08:03.180 ************************************ 00:08:03.180 END TEST accel_crc32c_C2 00:08:03.180 ************************************ 00:08:03.180 12:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.180 12:29:12 -- common/autotest_common.sh@10 -- # set +x 00:08:03.180 12:29:12 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:03.180 12:29:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:03.180 12:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.180 12:29:12 -- common/autotest_common.sh@10 -- # set +x 00:08:03.180 ************************************ 00:08:03.180 START TEST accel_copy 00:08:03.180 ************************************ 00:08:03.180 12:29:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:08:03.180 12:29:12 -- accel/accel.sh@16 -- # local accel_opc 00:08:03.180 12:29:12 -- accel/accel.sh@17 -- # local accel_module 00:08:03.180 12:29:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:08:03.180 12:29:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:03.180 12:29:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.180 12:29:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.180 12:29:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.180 12:29:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.180 12:29:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.180 12:29:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.180 12:29:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.180 12:29:12 -- accel/accel.sh@42 -- # jq -r . 00:08:03.180 [2024-05-15 12:29:12.108419] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:03.180 [2024-05-15 12:29:12.109554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:08:03.439 [2024-05-15 12:29:12.282905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.697 [2024-05-15 12:29:12.519205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.618 12:29:14 -- accel/accel.sh@18 -- # out=' 00:08:05.618 SPDK Configuration: 00:08:05.618 Core mask: 0x1 00:08:05.618 00:08:05.618 Accel Perf Configuration: 00:08:05.618 Workload Type: copy 00:08:05.618 Transfer size: 4096 bytes 00:08:05.618 Vector count 1 00:08:05.618 Module: software 00:08:05.618 Queue depth: 32 00:08:05.618 Allocate depth: 32 00:08:05.618 # threads/core: 1 00:08:05.618 Run time: 1 seconds 00:08:05.618 Verify: Yes 00:08:05.618 00:08:05.618 Running for 1 seconds... 
00:08:05.618 00:08:05.618 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:05.618 ------------------------------------------------------------------------------------ 00:08:05.618 0,0 235744/s 920 MiB/s 0 0 00:08:05.618 ==================================================================================== 00:08:05.618 Total 235744/s 920 MiB/s 0 0' 00:08:05.618 12:29:14 -- accel/accel.sh@20 -- # IFS=: 00:08:05.618 12:29:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:05.618 12:29:14 -- accel/accel.sh@20 -- # read -r var val 00:08:05.618 12:29:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:05.618 12:29:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.618 12:29:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.618 12:29:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.618 12:29:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.618 12:29:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.618 12:29:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.618 12:29:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.618 12:29:14 -- accel/accel.sh@42 -- # jq -r . 00:08:05.877 [2024-05-15 12:29:14.654985] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:05.877 [2024-05-15 12:29:14.655141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59925 ] 00:08:05.877 [2024-05-15 12:29:14.819554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.135 [2024-05-15 12:29:15.096868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val= 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val= 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val=0x1 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val= 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val= 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val=copy 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- 
accel/accel.sh@21 -- # val= 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val=software 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val=32 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val=32 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val=1 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val=Yes 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val= 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:06.394 12:29:15 -- accel/accel.sh@21 -- # val= 00:08:06.394 12:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # IFS=: 00:08:06.394 12:29:15 -- accel/accel.sh@20 -- # read -r var val 00:08:08.296 12:29:17 -- accel/accel.sh@21 -- # val= 00:08:08.296 12:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # IFS=: 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # read -r var val 00:08:08.296 12:29:17 -- accel/accel.sh@21 -- # val= 00:08:08.296 12:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # IFS=: 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # read -r var val 00:08:08.296 12:29:17 -- accel/accel.sh@21 -- # val= 00:08:08.296 12:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # IFS=: 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # read -r var val 00:08:08.296 12:29:17 -- accel/accel.sh@21 -- # val= 00:08:08.296 12:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # IFS=: 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # read -r var val 00:08:08.296 12:29:17 -- accel/accel.sh@21 -- # val= 00:08:08.296 12:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # IFS=: 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # read -r var val 00:08:08.296 12:29:17 -- accel/accel.sh@21 -- # val= 00:08:08.296 12:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.296 12:29:17 -- accel/accel.sh@20 -- # IFS=: 00:08:08.296 12:29:17 -- 
accel/accel.sh@20 -- # read -r var val 00:08:08.296 12:29:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:08.296 12:29:17 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:08:08.296 12:29:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.296 00:08:08.296 real 0m5.127s 00:08:08.296 user 0m4.542s 00:08:08.296 sys 0m0.370s 00:08:08.296 12:29:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.296 12:29:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.296 ************************************ 00:08:08.296 END TEST accel_copy 00:08:08.296 ************************************ 00:08:08.296 12:29:17 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:08.296 12:29:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:08.297 12:29:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.297 12:29:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.297 ************************************ 00:08:08.297 START TEST accel_fill 00:08:08.297 ************************************ 00:08:08.297 12:29:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:08.297 12:29:17 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.297 12:29:17 -- accel/accel.sh@17 -- # local accel_module 00:08:08.297 12:29:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:08.297 12:29:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:08.297 12:29:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.297 12:29:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.297 12:29:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.297 12:29:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.297 12:29:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.297 12:29:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.297 12:29:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.297 12:29:17 -- accel/accel.sh@42 -- # jq -r . 00:08:08.297 [2024-05-15 12:29:17.287576] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:08.297 [2024-05-15 12:29:17.287734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59966 ] 00:08:08.602 [2024-05-15 12:29:17.462841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.862 [2024-05-15 12:29:17.705640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.393 12:29:19 -- accel/accel.sh@18 -- # out=' 00:08:11.393 SPDK Configuration: 00:08:11.393 Core mask: 0x1 00:08:11.393 00:08:11.393 Accel Perf Configuration: 00:08:11.393 Workload Type: fill 00:08:11.393 Fill pattern: 0x80 00:08:11.393 Transfer size: 4096 bytes 00:08:11.393 Vector count 1 00:08:11.393 Module: software 00:08:11.393 Queue depth: 64 00:08:11.393 Allocate depth: 64 00:08:11.393 # threads/core: 1 00:08:11.393 Run time: 1 seconds 00:08:11.393 Verify: Yes 00:08:11.393 00:08:11.393 Running for 1 seconds... 
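The accel_copy case that just finished above is a single accel_perf run. As a minimal standalone sketch, assuming the build-tree path shown in this log (the harness additionally feeds a JSON config over -c /dev/fd/62, omitted here), the same benchmark can be rerun by hand; -t is run time in seconds, -w the workload, -y enables verification, and queue/allocate depth default to the 32/32 printed in the summary:

  # Rerun the plain copy workload outside the harness (path as in this log)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y

The copy table is self-consistent: 235744 transfers/s * 4096 B is approximately 920 MiB/s, matching both rows.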
00:08:11.393 00:08:11.393 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.393 ------------------------------------------------------------------------------------ 00:08:11.393 0,0 373824/s 1460 MiB/s 0 0 00:08:11.393 ==================================================================================== 00:08:11.393 Total 373824/s 1460 MiB/s 0 0' 00:08:11.393 12:29:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.393 12:29:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:11.393 12:29:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.393 12:29:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:11.393 12:29:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.393 12:29:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.393 12:29:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.393 12:29:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.393 12:29:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.393 12:29:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.393 12:29:19 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.393 12:29:19 -- accel/accel.sh@42 -- # jq -r . 00:08:11.393 [2024-05-15 12:29:19.869488] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:11.393 [2024-05-15 12:29:19.869673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59999 ] 00:08:11.393 [2024-05-15 12:29:20.039947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.393 [2024-05-15 12:29:20.277404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.652 12:29:20 -- accel/accel.sh@21 -- # val= 00:08:11.652 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.652 12:29:20 -- accel/accel.sh@21 -- # val= 00:08:11.652 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.652 12:29:20 -- accel/accel.sh@21 -- # val=0x1 00:08:11.652 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.652 12:29:20 -- accel/accel.sh@21 -- # val= 00:08:11.652 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.652 12:29:20 -- accel/accel.sh@21 -- # val= 00:08:11.652 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.652 12:29:20 -- accel/accel.sh@21 -- # val=fill 00:08:11.652 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.652 12:29:20 -- accel/accel.sh@24 -- # accel_opc=fill 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.652 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.652 12:29:20 -- accel/accel.sh@21 -- # val=0x80 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 
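The fill flags above map directly onto the summary just printed: -f 128 is the 0x80 byte pattern (128 decimal), and -q 64 / -a 64 are the queue and allocate depths. A standalone sketch under the same path assumption:

  # Fill 4096-byte buffers with pattern 0x80 (-f takes the byte in decimal) at queue depth 64
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y

The bandwidth checks out: 373824 transfers/s * 4096 B = 1460 MiB/s, as printed.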
00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val= 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val=software 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val=64 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val=64 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val=1 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val=Yes 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val= 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:11.653 12:29:20 -- accel/accel.sh@21 -- # val= 00:08:11.653 12:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # IFS=: 00:08:11.653 12:29:20 -- accel/accel.sh@20 -- # read -r var val 00:08:13.604 12:29:22 -- accel/accel.sh@21 -- # val= 00:08:13.604 12:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # IFS=: 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # read -r var val 00:08:13.604 12:29:22 -- accel/accel.sh@21 -- # val= 00:08:13.604 12:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # IFS=: 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # read -r var val 00:08:13.604 12:29:22 -- accel/accel.sh@21 -- # val= 00:08:13.604 12:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # IFS=: 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # read -r var val 00:08:13.604 12:29:22 -- accel/accel.sh@21 -- # val= 00:08:13.604 12:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # IFS=: 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # read -r var val 00:08:13.604 12:29:22 -- accel/accel.sh@21 -- # val= 00:08:13.604 12:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # IFS=: 
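The long val= / case "$var" in / IFS=: / read -r var val runs in this trace are accel.sh consuming accel_perf's key:value output one pair per iteration. A hypothetical standalone form of the same idiom (the input file name is illustrative):

  # Sketch of the parse loop suggested by the xtrace above
  while IFS=: read -r var val; do
    case "$var" in
      *) ;;  # per-key handling elided; the trace shows values like 0x1, fill, software
    esac
  done < /tmp/accel_perf.out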
00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # read -r var val 00:08:13.604 12:29:22 -- accel/accel.sh@21 -- # val= 00:08:13.604 12:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # IFS=: 00:08:13.604 12:29:22 -- accel/accel.sh@20 -- # read -r var val 00:08:13.604 ************************************ 00:08:13.604 END TEST accel_fill 00:08:13.604 ************************************ 00:08:13.604 12:29:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.604 12:29:22 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:08:13.604 12:29:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.604 00:08:13.604 real 0m5.142s 00:08:13.604 user 0m4.524s 00:08:13.604 sys 0m0.406s 00:08:13.604 12:29:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.604 12:29:22 -- common/autotest_common.sh@10 -- # set +x 00:08:13.604 12:29:22 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:13.604 12:29:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:13.604 12:29:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.604 12:29:22 -- common/autotest_common.sh@10 -- # set +x 00:08:13.604 ************************************ 00:08:13.604 START TEST accel_copy_crc32c 00:08:13.604 ************************************ 00:08:13.604 12:29:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:08:13.604 12:29:22 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.604 12:29:22 -- accel/accel.sh@17 -- # local accel_module 00:08:13.604 12:29:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:13.604 12:29:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:13.604 12:29:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.604 12:29:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.604 12:29:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.604 12:29:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.604 12:29:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.604 12:29:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.604 12:29:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.604 12:29:22 -- accel/accel.sh@42 -- # jq -r . 00:08:13.604 [2024-05-15 12:29:22.474978] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:13.604 [2024-05-15 12:29:22.475121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60044 ] 00:08:13.864 [2024-05-15 12:29:22.639243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.123 [2024-05-15 12:29:22.882501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.024 12:29:24 -- accel/accel.sh@18 -- # out=' 00:08:16.024 SPDK Configuration: 00:08:16.024 Core mask: 0x1 00:08:16.024 00:08:16.024 Accel Perf Configuration: 00:08:16.024 Workload Type: copy_crc32c 00:08:16.024 CRC-32C seed: 0 00:08:16.024 Vector size: 4096 bytes 00:08:16.024 Transfer size: 4096 bytes 00:08:16.024 Vector count 1 00:08:16.024 Module: software 00:08:16.024 Queue depth: 32 00:08:16.024 Allocate depth: 32 00:08:16.024 # threads/core: 1 00:08:16.024 Run time: 1 seconds 00:08:16.024 Verify: Yes 00:08:16.024 00:08:16.024 Running for 1 seconds... 
00:08:16.024 00:08:16.024 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:16.024 ------------------------------------------------------------------------------------ 00:08:16.024 0,0 195328/s 763 MiB/s 0 0 00:08:16.024 ==================================================================================== 00:08:16.024 Total 195328/s 763 MiB/s 0 0' 00:08:16.024 12:29:24 -- accel/accel.sh@20 -- # IFS=: 00:08:16.024 12:29:24 -- accel/accel.sh@20 -- # read -r var val 00:08:16.024 12:29:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:16.024 12:29:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:16.024 12:29:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.024 12:29:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.024 12:29:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.024 12:29:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.024 12:29:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.024 12:29:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.024 12:29:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.024 12:29:24 -- accel/accel.sh@42 -- # jq -r . 00:08:16.283 [2024-05-15 12:29:25.035206] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:16.283 [2024-05-15 12:29:25.035419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60076 ] 00:08:16.283 [2024-05-15 12:29:25.211144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.541 [2024-05-15 12:29:25.465531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val= 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val= 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=0x1 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val= 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val= 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=0 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 
12:29:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val= 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=software 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@23 -- # accel_module=software 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=32 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=32 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=1 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val=Yes 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val= 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:16.800 12:29:25 -- accel/accel.sh@21 -- # val= 00:08:16.800 12:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # IFS=: 00:08:16.800 12:29:25 -- accel/accel.sh@20 -- # read -r var val 00:08:18.702 12:29:27 -- accel/accel.sh@21 -- # val= 00:08:18.702 12:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # IFS=: 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # read -r var val 00:08:18.702 12:29:27 -- accel/accel.sh@21 -- # val= 00:08:18.702 12:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # IFS=: 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # read -r var val 00:08:18.702 12:29:27 -- accel/accel.sh@21 -- # val= 00:08:18.702 12:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # IFS=: 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # read -r var val 00:08:18.702 12:29:27 -- accel/accel.sh@21 -- # val= 00:08:18.702 12:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # IFS=: 
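copy_crc32c combines the copy with a CRC-32C over the data; the summary above shows the default seed 0 and matching 4096-byte vector and transfer sizes. A standalone sketch, same path assumption as before:

  # Copy plus CRC-32C in one operation (seed 0 per the summary above)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y

Here 195328 transfers/s * 4096 B is exactly 763 MiB/s, matching both table rows.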
00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # read -r var val 00:08:18.702 12:29:27 -- accel/accel.sh@21 -- # val= 00:08:18.702 12:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # IFS=: 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # read -r var val 00:08:18.702 12:29:27 -- accel/accel.sh@21 -- # val= 00:08:18.702 12:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # IFS=: 00:08:18.702 12:29:27 -- accel/accel.sh@20 -- # read -r var val 00:08:18.702 12:29:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:18.702 12:29:27 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:18.702 12:29:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.702 00:08:18.702 real 0m5.107s 00:08:18.702 user 0m4.521s 00:08:18.702 sys 0m0.372s 00:08:18.702 ************************************ 00:08:18.702 END TEST accel_copy_crc32c 00:08:18.702 ************************************ 00:08:18.702 12:29:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.702 12:29:27 -- common/autotest_common.sh@10 -- # set +x 00:08:18.702 12:29:27 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:18.702 12:29:27 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:18.702 12:29:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.702 12:29:27 -- common/autotest_common.sh@10 -- # set +x 00:08:18.702 ************************************ 00:08:18.702 START TEST accel_copy_crc32c_C2 00:08:18.702 ************************************ 00:08:18.702 12:29:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:18.702 12:29:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:18.702 12:29:27 -- accel/accel.sh@17 -- # local accel_module 00:08:18.702 12:29:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:18.702 12:29:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:18.702 12:29:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:18.702 12:29:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.702 12:29:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.702 12:29:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.702 12:29:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.702 12:29:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.702 12:29:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.702 12:29:27 -- accel/accel.sh@42 -- # jq -r . 00:08:18.703 [2024-05-15 12:29:27.627098] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:18.703 [2024-05-15 12:29:27.627245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:08:19.054 [2024-05-15 12:29:27.790806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.054 [2024-05-15 12:29:28.030165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.590 12:29:30 -- accel/accel.sh@18 -- # out=' 00:08:21.590 SPDK Configuration: 00:08:21.590 Core mask: 0x1 00:08:21.590 00:08:21.590 Accel Perf Configuration: 00:08:21.590 Workload Type: copy_crc32c 00:08:21.590 CRC-32C seed: 0 00:08:21.590 Vector size: 4096 bytes 00:08:21.590 Transfer size: 8192 bytes 00:08:21.590 Vector count 2 00:08:21.590 Module: software 00:08:21.590 Queue depth: 32 00:08:21.590 Allocate depth: 32 00:08:21.590 # threads/core: 1 00:08:21.590 Run time: 1 seconds 00:08:21.590 Verify: Yes 00:08:21.590 00:08:21.590 Running for 1 seconds... 00:08:21.590 00:08:21.590 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:21.590 ------------------------------------------------------------------------------------ 00:08:21.590 0,0 141888/s 1108 MiB/s 0 0 00:08:21.590 ==================================================================================== 00:08:21.590 Total 141888/s 554 MiB/s 0 0' 00:08:21.590 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.590 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.590 12:29:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:21.590 12:29:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:21.590 12:29:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.590 12:29:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:21.590 12:29:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.590 12:29:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.590 12:29:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:21.590 12:29:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:21.590 12:29:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:21.590 12:29:30 -- accel/accel.sh@42 -- # jq -r . 00:08:21.590 [2024-05-15 12:29:30.142011] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
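With -C 2 each operation spans two 4096-byte source vectors, hence the 8192-byte transfer size in the summary above. Note the two bandwidth figures disagree: the per-core row's 1108 MiB/s matches 141888 transfers/s * 8192 B, while the Total row's 554 MiB/s corresponds to the 4096-byte vector size, so the rows appear to differ by exactly the vector count; the per-core figure is the one consistent with the stated transfer size. A standalone sketch:

  # copy_crc32c over 2 source vectors: -C sets the vector count (2 x 4096 B = 8192 B per transfer)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2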
00:08:21.590 [2024-05-15 12:29:30.142156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60154 ] 00:08:21.590 [2024-05-15 12:29:30.305612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.590 [2024-05-15 12:29:30.542919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val= 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val= 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=0x1 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val= 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val= 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=0 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val= 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=software 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@23 -- # accel_module=software 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=32 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=32 
00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=1 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val=Yes 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val= 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:21.849 12:29:30 -- accel/accel.sh@21 -- # val= 00:08:21.849 12:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # IFS=: 00:08:21.849 12:29:30 -- accel/accel.sh@20 -- # read -r var val 00:08:23.749 12:29:32 -- accel/accel.sh@21 -- # val= 00:08:23.749 12:29:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # IFS=: 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # read -r var val 00:08:23.749 12:29:32 -- accel/accel.sh@21 -- # val= 00:08:23.749 12:29:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # IFS=: 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # read -r var val 00:08:23.749 12:29:32 -- accel/accel.sh@21 -- # val= 00:08:23.749 12:29:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # IFS=: 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # read -r var val 00:08:23.749 12:29:32 -- accel/accel.sh@21 -- # val= 00:08:23.749 12:29:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # IFS=: 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # read -r var val 00:08:23.749 12:29:32 -- accel/accel.sh@21 -- # val= 00:08:23.749 12:29:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # IFS=: 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # read -r var val 00:08:23.749 12:29:32 -- accel/accel.sh@21 -- # val= 00:08:23.749 12:29:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # IFS=: 00:08:23.749 12:29:32 -- accel/accel.sh@20 -- # read -r var val 00:08:23.749 12:29:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:23.749 12:29:32 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:23.749 12:29:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.749 00:08:23.749 real 0m5.047s 00:08:23.749 user 0m4.433s 00:08:23.749 sys 0m0.398s 00:08:23.749 12:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.749 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.749 ************************************ 00:08:23.749 END TEST accel_copy_crc32c_C2 00:08:23.749 ************************************ 00:08:23.749 12:29:32 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:23.749 12:29:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
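The START/END banners, the time output (real/user/sys) and the xtrace_disable calls all come from the run_test wrapper that launches each case, e.g. run_test accel_dualcast accel_test -t 1 -w dualcast -y above. A simplified, hypothetical reduction of that pattern (not the actual implementation):

  # Hypothetical reduction of the run_test wrapper seen throughout this log
  run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"    # e.g. accel_test -t 1 -w dualcast -y
    echo "END TEST $name"
  }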
00:08:23.749 12:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.749 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.749 ************************************ 00:08:23.749 START TEST accel_dualcast 00:08:23.749 ************************************ 00:08:23.749 12:29:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:23.749 12:29:32 -- accel/accel.sh@16 -- # local accel_opc 00:08:23.749 12:29:32 -- accel/accel.sh@17 -- # local accel_module 00:08:23.749 12:29:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:23.749 12:29:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:23.749 12:29:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.749 12:29:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:23.749 12:29:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.749 12:29:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.749 12:29:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:23.749 12:29:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:23.749 12:29:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:23.749 12:29:32 -- accel/accel.sh@42 -- # jq -r . 00:08:23.749 [2024-05-15 12:29:32.726829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:23.749 [2024-05-15 12:29:32.727659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60199 ] 00:08:24.007 [2024-05-15 12:29:32.902463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.265 [2024-05-15 12:29:33.139065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.809 12:29:35 -- accel/accel.sh@18 -- # out=' 00:08:26.809 SPDK Configuration: 00:08:26.809 Core mask: 0x1 00:08:26.809 00:08:26.809 Accel Perf Configuration: 00:08:26.809 Workload Type: dualcast 00:08:26.809 Transfer size: 4096 bytes 00:08:26.809 Vector count 1 00:08:26.809 Module: software 00:08:26.809 Queue depth: 32 00:08:26.809 Allocate depth: 32 00:08:26.809 # threads/core: 1 00:08:26.809 Run time: 1 seconds 00:08:26.809 Verify: Yes 00:08:26.809 00:08:26.809 Running for 1 seconds... 00:08:26.809 00:08:26.809 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:26.809 ------------------------------------------------------------------------------------ 00:08:26.809 0,0 272352/s 1063 MiB/s 0 0 00:08:26.809 ==================================================================================== 00:08:26.809 Total 272352/s 1063 MiB/s 0 0' 00:08:26.809 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:26.809 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:26.809 12:29:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:26.809 12:29:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:26.809 12:29:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:26.809 12:29:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:26.809 12:29:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.809 12:29:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.809 12:29:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:26.809 12:29:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:26.809 12:29:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:26.809 12:29:35 -- accel/accel.sh@42 -- # jq -r . 
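dualcast writes one source buffer out to two destinations per operation. A standalone sketch, same path assumption:

  # Dualcast: one 4096-byte source, two destination copies per transfer
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y

272352 transfers/s * 4096 B is approximately 1063 MiB/s, matching the table that follows from this run.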
00:08:26.809 [2024-05-15 12:29:35.276117] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:26.809 [2024-05-15 12:29:35.276285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60232 ] 00:08:26.809 [2024-05-15 12:29:35.446204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.809 [2024-05-15 12:29:35.714543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val= 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val= 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val=0x1 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val= 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val= 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val=dualcast 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val= 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val=software 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@23 -- # accel_module=software 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val=32 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val=32 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val=1 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 
12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val=Yes 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val= 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:27.067 12:29:35 -- accel/accel.sh@21 -- # val= 00:08:27.067 12:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # IFS=: 00:08:27.067 12:29:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.965 12:29:37 -- accel/accel.sh@21 -- # val= 00:08:28.965 12:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # IFS=: 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # read -r var val 00:08:28.965 12:29:37 -- accel/accel.sh@21 -- # val= 00:08:28.965 12:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # IFS=: 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # read -r var val 00:08:28.965 12:29:37 -- accel/accel.sh@21 -- # val= 00:08:28.965 12:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # IFS=: 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # read -r var val 00:08:28.965 12:29:37 -- accel/accel.sh@21 -- # val= 00:08:28.965 12:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # IFS=: 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # read -r var val 00:08:28.965 12:29:37 -- accel/accel.sh@21 -- # val= 00:08:28.965 12:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # IFS=: 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # read -r var val 00:08:28.965 12:29:37 -- accel/accel.sh@21 -- # val= 00:08:28.965 12:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # IFS=: 00:08:28.965 12:29:37 -- accel/accel.sh@20 -- # read -r var val 00:08:28.965 12:29:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:28.965 12:29:37 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:28.965 12:29:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.965 00:08:28.965 real 0m5.110s 00:08:28.965 user 0m4.514s 00:08:28.965 sys 0m0.383s 00:08:28.965 ************************************ 00:08:28.965 END TEST accel_dualcast 00:08:28.965 ************************************ 00:08:28.965 12:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.965 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:08:28.965 12:29:37 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:28.965 12:29:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:28.965 12:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.965 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:08:28.965 ************************************ 00:08:28.965 START TEST accel_compare 00:08:28.965 ************************************ 00:08:28.965 12:29:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:28.965 
12:29:37 -- accel/accel.sh@16 -- # local accel_opc 00:08:28.965 12:29:37 -- accel/accel.sh@17 -- # local accel_module 00:08:28.965 12:29:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:28.965 12:29:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:28.965 12:29:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.965 12:29:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:28.965 12:29:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.965 12:29:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.965 12:29:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:28.965 12:29:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:28.965 12:29:37 -- accel/accel.sh@41 -- # local IFS=, 00:08:28.965 12:29:37 -- accel/accel.sh@42 -- # jq -r . 00:08:28.965 [2024-05-15 12:29:37.890811] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:28.965 [2024-05-15 12:29:37.891003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60273 ] 00:08:29.222 [2024-05-15 12:29:38.063125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.479 [2024-05-15 12:29:38.303296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.378 12:29:40 -- accel/accel.sh@18 -- # out=' 00:08:31.378 SPDK Configuration: 00:08:31.378 Core mask: 0x1 00:08:31.378 00:08:31.378 Accel Perf Configuration: 00:08:31.378 Workload Type: compare 00:08:31.378 Transfer size: 4096 bytes 00:08:31.378 Vector count 1 00:08:31.378 Module: software 00:08:31.378 Queue depth: 32 00:08:31.378 Allocate depth: 32 00:08:31.378 # threads/core: 1 00:08:31.378 Run time: 1 seconds 00:08:31.378 Verify: Yes 00:08:31.378 00:08:31.378 Running for 1 seconds... 00:08:31.378 00:08:31.378 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:31.378 ------------------------------------------------------------------------------------ 00:08:31.378 0,0 366368/s 1431 MiB/s 0 0 00:08:31.378 ==================================================================================== 00:08:31.378 Total 366368/s 1431 MiB/s 0 0' 00:08:31.378 12:29:40 -- accel/accel.sh@20 -- # IFS=: 00:08:31.378 12:29:40 -- accel/accel.sh@20 -- # read -r var val 00:08:31.378 12:29:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:31.378 12:29:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:31.378 12:29:40 -- accel/accel.sh@12 -- # build_accel_config 00:08:31.378 12:29:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:31.378 12:29:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.378 12:29:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.378 12:29:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:31.378 12:29:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:31.378 12:29:40 -- accel/accel.sh@41 -- # local IFS=, 00:08:31.378 12:29:40 -- accel/accel.sh@42 -- # jq -r . 00:08:31.637 [2024-05-15 12:29:40.431549] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
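compare is a memcmp-style check of two equal-length buffers, so no data is moved; the table above still derives MiB/s from the 4096-byte transfer size (366368 transfers/s * 4096 B is approximately 1431 MiB/s). A standalone sketch:

  # Compare two 4096-byte buffers per operation
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y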
00:08:31.637 [2024-05-15 12:29:40.431761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:08:31.637 [2024-05-15 12:29:40.618300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.895 [2024-05-15 12:29:40.856843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val= 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val= 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val=0x1 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val= 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val= 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val=compare 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val= 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val=software 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@23 -- # accel_module=software 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val=32 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val=32 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val=1 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val='1 seconds' 
00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val=Yes 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val= 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:32.154 12:29:41 -- accel/accel.sh@21 -- # val= 00:08:32.154 12:29:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # IFS=: 00:08:32.154 12:29:41 -- accel/accel.sh@20 -- # read -r var val 00:08:34.069 12:29:42 -- accel/accel.sh@21 -- # val= 00:08:34.069 12:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # IFS=: 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # read -r var val 00:08:34.069 12:29:42 -- accel/accel.sh@21 -- # val= 00:08:34.069 12:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # IFS=: 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # read -r var val 00:08:34.069 12:29:42 -- accel/accel.sh@21 -- # val= 00:08:34.069 12:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # IFS=: 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # read -r var val 00:08:34.069 12:29:42 -- accel/accel.sh@21 -- # val= 00:08:34.069 12:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # IFS=: 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # read -r var val 00:08:34.069 12:29:42 -- accel/accel.sh@21 -- # val= 00:08:34.069 12:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # IFS=: 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # read -r var val 00:08:34.069 12:29:42 -- accel/accel.sh@21 -- # val= 00:08:34.069 12:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # IFS=: 00:08:34.069 12:29:42 -- accel/accel.sh@20 -- # read -r var val 00:08:34.069 12:29:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:34.069 12:29:42 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:34.069 12:29:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.069 00:08:34.069 real 0m5.166s 00:08:34.069 user 0m4.565s 00:08:34.069 sys 0m0.390s 00:08:34.069 12:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.069 12:29:42 -- common/autotest_common.sh@10 -- # set +x 00:08:34.069 ************************************ 00:08:34.069 END TEST accel_compare 00:08:34.069 ************************************ 00:08:34.069 12:29:43 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:34.069 12:29:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:34.069 12:29:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.069 12:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:34.069 ************************************ 00:08:34.069 START TEST accel_xor 00:08:34.069 ************************************ 00:08:34.069 12:29:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:34.069 12:29:43 -- accel/accel.sh@16 -- # local accel_opc 00:08:34.069 12:29:43 -- accel/accel.sh@17 -- # local accel_module 00:08:34.069 
12:29:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:34.069 12:29:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:34.069 12:29:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.069 12:29:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.069 12:29:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.069 12:29:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.069 12:29:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.069 12:29:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.069 12:29:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.069 12:29:43 -- accel/accel.sh@42 -- # jq -r . 00:08:34.336 [2024-05-15 12:29:43.117012] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:34.336 [2024-05-15 12:29:43.117240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60352 ] 00:08:34.336 [2024-05-15 12:29:43.304377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.596 [2024-05-15 12:29:43.546118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.128 12:29:45 -- accel/accel.sh@18 -- # out=' 00:08:37.128 SPDK Configuration: 00:08:37.128 Core mask: 0x1 00:08:37.128 00:08:37.128 Accel Perf Configuration: 00:08:37.128 Workload Type: xor 00:08:37.128 Source buffers: 2 00:08:37.128 Transfer size: 4096 bytes 00:08:37.128 Vector count 1 00:08:37.128 Module: software 00:08:37.128 Queue depth: 32 00:08:37.128 Allocate depth: 32 00:08:37.128 # threads/core: 1 00:08:37.128 Run time: 1 seconds 00:08:37.128 Verify: Yes 00:08:37.128 00:08:37.128 Running for 1 seconds... 00:08:37.128 00:08:37.128 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:37.128 ------------------------------------------------------------------------------------ 00:08:37.128 0,0 196704/s 768 MiB/s 0 0 00:08:37.128 ==================================================================================== 00:08:37.128 Total 196704/s 768 MiB/s 0 0' 00:08:37.128 12:29:45 -- accel/accel.sh@20 -- # IFS=: 00:08:37.128 12:29:45 -- accel/accel.sh@20 -- # read -r var val 00:08:37.128 12:29:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:37.128 12:29:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:37.128 12:29:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.128 12:29:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:37.128 12:29:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.128 12:29:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.128 12:29:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:37.128 12:29:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:37.128 12:29:45 -- accel/accel.sh@41 -- # local IFS=, 00:08:37.128 12:29:45 -- accel/accel.sh@42 -- # jq -r . 00:08:37.128 [2024-05-15 12:29:45.689834] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
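# The Bandwidth column in the table above follows from the Transfers column:
# each transfer moves one 4096-byte buffer, so 196704 transfers/s * 4096 bytes
# is roughly 768 MiB/s, matching both the per-core and Total rows. The
# arithmetic can be checked with a one-liner:
#   awk 'BEGIN { printf "%.0f MiB/s\n", 196704 * 4096 / (1024 * 1024) }'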
00:08:37.128 [2024-05-15 12:29:45.690038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60388 ] 00:08:37.128 [2024-05-15 12:29:45.866051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.128 [2024-05-15 12:29:46.107030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val= 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val= 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=0x1 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val= 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val= 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=xor 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=2 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val= 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=software 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@23 -- # accel_module=software 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=32 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=32 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=1 00:08:37.403 12:29:46 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val=Yes 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val= 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:37.403 12:29:46 -- accel/accel.sh@21 -- # val= 00:08:37.403 12:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # IFS=: 00:08:37.403 12:29:46 -- accel/accel.sh@20 -- # read -r var val 00:08:39.309 12:29:48 -- accel/accel.sh@21 -- # val= 00:08:39.309 12:29:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # IFS=: 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # read -r var val 00:08:39.309 12:29:48 -- accel/accel.sh@21 -- # val= 00:08:39.309 12:29:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # IFS=: 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # read -r var val 00:08:39.309 12:29:48 -- accel/accel.sh@21 -- # val= 00:08:39.309 12:29:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # IFS=: 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # read -r var val 00:08:39.309 12:29:48 -- accel/accel.sh@21 -- # val= 00:08:39.309 12:29:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # IFS=: 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # read -r var val 00:08:39.309 12:29:48 -- accel/accel.sh@21 -- # val= 00:08:39.309 12:29:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # IFS=: 00:08:39.309 12:29:48 -- accel/accel.sh@20 -- # read -r var val 00:08:39.309 12:29:48 -- accel/accel.sh@21 -- # val= 00:08:39.309 12:29:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.310 12:29:48 -- accel/accel.sh@20 -- # IFS=: 00:08:39.310 12:29:48 -- accel/accel.sh@20 -- # read -r var val 00:08:39.310 12:29:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:39.310 12:29:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:39.310 12:29:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.310 00:08:39.310 real 0m5.151s 00:08:39.310 user 0m4.491s 00:08:39.310 sys 0m0.444s 00:08:39.310 12:29:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.310 12:29:48 -- common/autotest_common.sh@10 -- # set +x 00:08:39.310 ************************************ 00:08:39.310 END TEST accel_xor 00:08:39.310 ************************************ 00:08:39.310 12:29:48 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:39.310 12:29:48 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:39.310 12:29:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.310 12:29:48 -- common/autotest_common.sh@10 -- # set +x 00:08:39.310 ************************************ 00:08:39.310 START TEST accel_xor 00:08:39.310 ************************************ 00:08:39.310 
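# This second accel_xor test adds "-x 3", XOR-ing three source buffers instead
# of the two used in the run above. Note that accel_test always launches the
# binary as "accel_perf -c /dev/fd/62 ...", so an optional JSON accel config can
# be fed in on a file descriptor (empty here: accel_json_cfg=() stays empty).
# A standalone sketch of the three-buffer run:
#   /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3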
12:29:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:39.310 12:29:48 -- accel/accel.sh@16 -- # local accel_opc 00:08:39.310 12:29:48 -- accel/accel.sh@17 -- # local accel_module 00:08:39.310 12:29:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:39.310 12:29:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:39.310 12:29:48 -- accel/accel.sh@12 -- # build_accel_config 00:08:39.310 12:29:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:39.310 12:29:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.310 12:29:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.310 12:29:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:39.310 12:29:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:39.310 12:29:48 -- accel/accel.sh@41 -- # local IFS=, 00:08:39.310 12:29:48 -- accel/accel.sh@42 -- # jq -r . 00:08:39.310 [2024-05-15 12:29:48.299174] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:39.310 [2024-05-15 12:29:48.299346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60435 ] 00:08:39.568 [2024-05-15 12:29:48.474483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.825 [2024-05-15 12:29:48.717066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.367 12:29:50 -- accel/accel.sh@18 -- # out=' 00:08:42.367 SPDK Configuration: 00:08:42.367 Core mask: 0x1 00:08:42.367 00:08:42.367 Accel Perf Configuration: 00:08:42.367 Workload Type: xor 00:08:42.367 Source buffers: 3 00:08:42.367 Transfer size: 4096 bytes 00:08:42.367 Vector count 1 00:08:42.367 Module: software 00:08:42.367 Queue depth: 32 00:08:42.367 Allocate depth: 32 00:08:42.367 # threads/core: 1 00:08:42.367 Run time: 1 seconds 00:08:42.367 Verify: Yes 00:08:42.367 00:08:42.367 Running for 1 seconds... 00:08:42.367 00:08:42.367 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:42.367 ------------------------------------------------------------------------------------ 00:08:42.367 0,0 187200/s 731 MiB/s 0 0 00:08:42.367 ==================================================================================== 00:08:42.367 Total 187200/s 731 MiB/s 0 0' 00:08:42.367 12:29:50 -- accel/accel.sh@20 -- # IFS=: 00:08:42.367 12:29:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:42.367 12:29:50 -- accel/accel.sh@20 -- # read -r var val 00:08:42.367 12:29:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:42.367 12:29:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:42.367 12:29:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:42.367 12:29:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.367 12:29:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.367 12:29:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:42.367 12:29:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:42.367 12:29:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:42.367 12:29:50 -- accel/accel.sh@42 -- # jq -r . 00:08:42.367 [2024-05-15 12:29:50.908435] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:42.367 [2024-05-15 12:29:50.908592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60465 ] 00:08:42.367 [2024-05-15 12:29:51.071638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.367 [2024-05-15 12:29:51.327615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val= 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val= 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val=0x1 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val= 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val= 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val=xor 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val=3 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val= 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.653 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.653 12:29:51 -- accel/accel.sh@21 -- # val=software 00:08:42.653 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@23 -- # accel_module=software 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.654 12:29:51 -- accel/accel.sh@21 -- # val=32 00:08:42.654 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.654 12:29:51 -- accel/accel.sh@21 -- # val=32 00:08:42.654 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.654 12:29:51 -- accel/accel.sh@21 -- # val=1 00:08:42.654 12:29:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.654 12:29:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:42.654 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.654 12:29:51 -- accel/accel.sh@21 -- # val=Yes 00:08:42.654 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.654 12:29:51 -- accel/accel.sh@21 -- # val= 00:08:42.654 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:42.654 12:29:51 -- accel/accel.sh@21 -- # val= 00:08:42.654 12:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # IFS=: 00:08:42.654 12:29:51 -- accel/accel.sh@20 -- # read -r var val 00:08:44.555 12:29:53 -- accel/accel.sh@21 -- # val= 00:08:44.555 12:29:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.555 12:29:53 -- accel/accel.sh@20 -- # IFS=: 00:08:44.555 12:29:53 -- accel/accel.sh@20 -- # read -r var val 00:08:44.555 12:29:53 -- accel/accel.sh@21 -- # val= 00:08:44.555 12:29:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.555 12:29:53 -- accel/accel.sh@20 -- # IFS=: 00:08:44.555 12:29:53 -- accel/accel.sh@20 -- # read -r var val 00:08:44.555 12:29:53 -- accel/accel.sh@21 -- # val= 00:08:44.555 12:29:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.555 12:29:53 -- accel/accel.sh@20 -- # IFS=: 00:08:44.555 12:29:53 -- accel/accel.sh@20 -- # read -r var val 00:08:44.555 12:29:53 -- accel/accel.sh@21 -- # val= 00:08:44.556 12:29:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.556 12:29:53 -- accel/accel.sh@20 -- # IFS=: 00:08:44.556 12:29:53 -- accel/accel.sh@20 -- # read -r var val 00:08:44.556 12:29:53 -- accel/accel.sh@21 -- # val= 00:08:44.556 12:29:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.556 12:29:53 -- accel/accel.sh@20 -- # IFS=: 00:08:44.556 12:29:53 -- accel/accel.sh@20 -- # read -r var val 00:08:44.556 12:29:53 -- accel/accel.sh@21 -- # val= 00:08:44.556 12:29:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.556 12:29:53 -- accel/accel.sh@20 -- # IFS=: 00:08:44.556 12:29:53 -- accel/accel.sh@20 -- # read -r var val 00:08:44.556 12:29:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:44.556 12:29:53 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:44.556 12:29:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.556 00:08:44.556 real 0m5.155s 00:08:44.556 user 0m4.558s 00:08:44.556 sys 0m0.381s 00:08:44.556 12:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.556 12:29:53 -- common/autotest_common.sh@10 -- # set +x 00:08:44.556 ************************************ 00:08:44.556 END TEST accel_xor 00:08:44.556 ************************************ 00:08:44.556 12:29:53 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:44.556 12:29:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:44.556 12:29:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.556 12:29:53 -- common/autotest_common.sh@10 -- # set +x 00:08:44.556 ************************************ 00:08:44.556 START TEST accel_dif_verify 00:08:44.556 ************************************ 
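# accel_dif_verify, starting above, exercises DIF (T10 protection information)
# checking: per the configuration dump that follows, each 4096-byte transfer is
# treated as 512-byte blocks each carrying 8 bytes of metadata. The harness does
# not pass -y here, so the summary reports "Verify: No". A standalone sketch of
# the same run:
#   /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify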
00:08:44.556 12:29:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:08:44.556 12:29:53 -- accel/accel.sh@16 -- # local accel_opc 00:08:44.556 12:29:53 -- accel/accel.sh@17 -- # local accel_module 00:08:44.556 12:29:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:08:44.556 12:29:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:44.556 12:29:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:44.556 12:29:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:44.556 12:29:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.556 12:29:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.556 12:29:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:44.556 12:29:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:44.556 12:29:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:44.556 12:29:53 -- accel/accel.sh@42 -- # jq -r . 00:08:44.556 [2024-05-15 12:29:53.495684] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:44.556 [2024-05-15 12:29:53.495857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60513 ] 00:08:44.814 [2024-05-15 12:29:53.667463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.074 [2024-05-15 12:29:53.952576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.605 12:29:56 -- accel/accel.sh@18 -- # out=' 00:08:47.605 SPDK Configuration: 00:08:47.605 Core mask: 0x1 00:08:47.605 00:08:47.605 Accel Perf Configuration: 00:08:47.605 Workload Type: dif_verify 00:08:47.605 Vector size: 4096 bytes 00:08:47.605 Transfer size: 4096 bytes 00:08:47.605 Block size: 512 bytes 00:08:47.605 Metadata size: 8 bytes 00:08:47.605 Vector count 1 00:08:47.605 Module: software 00:08:47.605 Queue depth: 32 00:08:47.605 Allocate depth: 32 00:08:47.605 # threads/core: 1 00:08:47.605 Run time: 1 seconds 00:08:47.605 Verify: No 00:08:47.605 00:08:47.605 Running for 1 seconds... 00:08:47.605 00:08:47.605 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:47.605 ------------------------------------------------------------------------------------ 00:08:47.605 0,0 86720/s 338 MiB/s 0 0 00:08:47.605 ==================================================================================== 00:08:47.605 Total 86720/s 338 MiB/s 0 0' 00:08:47.605 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:47.605 12:29:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:47.605 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:47.605 12:29:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:47.605 12:29:56 -- accel/accel.sh@12 -- # build_accel_config 00:08:47.605 12:29:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:47.605 12:29:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.605 12:29:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.605 12:29:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:47.605 12:29:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:47.605 12:29:56 -- accel/accel.sh@41 -- # local IFS=, 00:08:47.605 12:29:56 -- accel/accel.sh@42 -- # jq -r . 00:08:47.605 [2024-05-15 12:29:56.179178] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:08:47.605 [2024-05-15 12:29:56.179403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60539 ] 00:08:47.605 [2024-05-15 12:29:56.363119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.863 [2024-05-15 12:29:56.675010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val= 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val= 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val=0x1 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val= 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val= 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val=dif_verify 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val= 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val=software 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@23 -- # accel_module=software 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 
-- # val=32 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val=32 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val=1 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val=No 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val= 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:48.121 12:29:56 -- accel/accel.sh@21 -- # val= 00:08:48.121 12:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # IFS=: 00:08:48.121 12:29:56 -- accel/accel.sh@20 -- # read -r var val 00:08:50.024 12:29:58 -- accel/accel.sh@21 -- # val= 00:08:50.024 12:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # IFS=: 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # read -r var val 00:08:50.024 12:29:58 -- accel/accel.sh@21 -- # val= 00:08:50.024 12:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # IFS=: 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # read -r var val 00:08:50.024 12:29:58 -- accel/accel.sh@21 -- # val= 00:08:50.024 12:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # IFS=: 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # read -r var val 00:08:50.024 12:29:58 -- accel/accel.sh@21 -- # val= 00:08:50.024 12:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # IFS=: 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # read -r var val 00:08:50.024 12:29:58 -- accel/accel.sh@21 -- # val= 00:08:50.024 12:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # IFS=: 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # read -r var val 00:08:50.024 12:29:58 -- accel/accel.sh@21 -- # val= 00:08:50.024 12:29:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # IFS=: 00:08:50.024 12:29:58 -- accel/accel.sh@20 -- # read -r var val 00:08:50.024 12:29:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:50.024 12:29:58 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:50.024 12:29:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:50.024 00:08:50.024 real 0m5.411s 00:08:50.024 user 0m4.742s 00:08:50.024 sys 0m0.450s 00:08:50.024 12:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.024 12:29:58 -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 ************************************ 00:08:50.024 END TEST 
accel_dif_verify 00:08:50.024 ************************************ 00:08:50.024 12:29:58 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:50.024 12:29:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:50.024 12:29:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.024 12:29:58 -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 ************************************ 00:08:50.024 START TEST accel_dif_generate 00:08:50.024 ************************************ 00:08:50.024 12:29:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:08:50.024 12:29:58 -- accel/accel.sh@16 -- # local accel_opc 00:08:50.024 12:29:58 -- accel/accel.sh@17 -- # local accel_module 00:08:50.024 12:29:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:50.025 12:29:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:50.025 12:29:58 -- accel/accel.sh@12 -- # build_accel_config 00:08:50.025 12:29:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:50.025 12:29:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.025 12:29:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.025 12:29:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.025 12:29:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.025 12:29:58 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.025 12:29:58 -- accel/accel.sh@42 -- # jq -r . 00:08:50.025 [2024-05-15 12:29:58.957420] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:50.025 [2024-05-15 12:29:58.957617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60591 ] 00:08:50.283 [2024-05-15 12:29:59.130217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.542 [2024-05-15 12:29:59.449190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.072 12:30:01 -- accel/accel.sh@18 -- # out=' 00:08:53.072 SPDK Configuration: 00:08:53.072 Core mask: 0x1 00:08:53.072 00:08:53.072 Accel Perf Configuration: 00:08:53.072 Workload Type: dif_generate 00:08:53.072 Vector size: 4096 bytes 00:08:53.072 Transfer size: 4096 bytes 00:08:53.072 Block size: 512 bytes 00:08:53.072 Metadata size: 8 bytes 00:08:53.072 Vector count 1 00:08:53.072 Module: software 00:08:53.072 Queue depth: 32 00:08:53.072 Allocate depth: 32 00:08:53.072 # threads/core: 1 00:08:53.072 Run time: 1 seconds 00:08:53.072 Verify: No 00:08:53.072 00:08:53.072 Running for 1 seconds... 
00:08:53.072 00:08:53.072 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:53.072 ------------------------------------------------------------------------------------ 00:08:53.072 0,0 103904/s 405 MiB/s 0 0 00:08:53.072 ==================================================================================== 00:08:53.072 Total 103904/s 405 MiB/s 0 0' 00:08:53.072 12:30:01 -- accel/accel.sh@20 -- # IFS=: 00:08:53.072 12:30:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:53.072 12:30:01 -- accel/accel.sh@20 -- # read -r var val 00:08:53.072 12:30:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:53.072 12:30:01 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.072 12:30:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:53.072 12:30:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.072 12:30:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.072 12:30:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:53.072 12:30:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:53.072 12:30:01 -- accel/accel.sh@41 -- # local IFS=, 00:08:53.072 12:30:01 -- accel/accel.sh@42 -- # jq -r . 00:08:53.072 [2024-05-15 12:30:01.590682] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:53.072 [2024-05-15 12:30:01.590860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:08:53.072 [2024-05-15 12:30:01.759352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.072 [2024-05-15 12:30:02.059356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val= 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val= 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val=0x1 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val= 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val= 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val=dif_generate 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val
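# The dif_generate passes traced here are the producer-side counterpart of
# dif_verify: rather than checking existing protection information, the accel
# operation generates the 8-byte integrity fields for each 512-byte block of
# the 4096-byte buffer. Standalone sketch with the harness's flags:
#   /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate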
00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val= 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val=software 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@23 -- # accel_module=software 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val=32 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val=32 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val=1 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val=No 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val= 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:53.330 12:30:02 -- accel/accel.sh@21 -- # val= 00:08:53.330 12:30:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # IFS=: 00:08:53.330 12:30:02 -- accel/accel.sh@20 -- # read -r var val 00:08:55.284 12:30:04 -- accel/accel.sh@21 -- # val= 00:08:55.284 12:30:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # IFS=: 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # read -r var val 00:08:55.284 12:30:04 -- accel/accel.sh@21 -- # val= 00:08:55.284 12:30:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # IFS=: 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # read -r var val 00:08:55.284 12:30:04 -- accel/accel.sh@21 -- # val= 00:08:55.284 12:30:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.284 12:30:04 -- 
accel/accel.sh@20 -- # IFS=: 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # read -r var val 00:08:55.284 12:30:04 -- accel/accel.sh@21 -- # val= 00:08:55.284 12:30:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # IFS=: 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # read -r var val 00:08:55.284 12:30:04 -- accel/accel.sh@21 -- # val= 00:08:55.284 12:30:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # IFS=: 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # read -r var val 00:08:55.284 12:30:04 -- accel/accel.sh@21 -- # val= 00:08:55.284 12:30:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # IFS=: 00:08:55.284 12:30:04 -- accel/accel.sh@20 -- # read -r var val 00:08:55.284 12:30:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:55.284 12:30:04 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:55.284 12:30:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:55.284 ************************************ 00:08:55.284 END TEST accel_dif_generate 00:08:55.284 ************************************ 00:08:55.284 00:08:55.284 real 0m5.286s 00:08:55.284 user 0m4.663s 00:08:55.284 sys 0m0.404s 00:08:55.284 12:30:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.284 12:30:04 -- common/autotest_common.sh@10 -- # set +x 00:08:55.284 12:30:04 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:55.284 12:30:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:55.284 12:30:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:55.284 12:30:04 -- common/autotest_common.sh@10 -- # set +x 00:08:55.284 ************************************ 00:08:55.284 START TEST accel_dif_generate_copy 00:08:55.284 ************************************ 00:08:55.284 12:30:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:55.284 12:30:04 -- accel/accel.sh@16 -- # local accel_opc 00:08:55.284 12:30:04 -- accel/accel.sh@17 -- # local accel_module 00:08:55.284 12:30:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:55.284 12:30:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:55.284 12:30:04 -- accel/accel.sh@12 -- # build_accel_config 00:08:55.284 12:30:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:55.284 12:30:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.284 12:30:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.284 12:30:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:55.284 12:30:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:55.284 12:30:04 -- accel/accel.sh@41 -- # local IFS=, 00:08:55.284 12:30:04 -- accel/accel.sh@42 -- # jq -r . 00:08:55.561 [2024-05-15 12:30:04.283471] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:55.561 [2024-05-15 12:30:04.283627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60669 ] 00:08:55.561 [2024-05-15 12:30:04.447772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.818 [2024-05-15 12:30:04.820316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.346 12:30:06 -- accel/accel.sh@18 -- # out=' 00:08:58.346 SPDK Configuration: 00:08:58.346 Core mask: 0x1 00:08:58.346 00:08:58.346 Accel Perf Configuration: 00:08:58.346 Workload Type: dif_generate_copy 00:08:58.346 Vector size: 4096 bytes 00:08:58.346 Transfer size: 4096 bytes 00:08:58.346 Vector count 1 00:08:58.346 Module: software 00:08:58.346 Queue depth: 32 00:08:58.346 Allocate depth: 32 00:08:58.346 # threads/core: 1 00:08:58.346 Run time: 1 seconds 00:08:58.346 Verify: No 00:08:58.346 00:08:58.346 Running for 1 seconds... 00:08:58.346 00:08:58.346 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:58.346 ------------------------------------------------------------------------------------ 00:08:58.346 0,0 79104/s 309 MiB/s 0 0 00:08:58.346 ==================================================================================== 00:08:58.346 Total 79104/s 309 MiB/s 0 0' 00:08:58.346 12:30:06 -- accel/accel.sh@20 -- # IFS=: 00:08:58.346 12:30:06 -- accel/accel.sh@20 -- # read -r var val 00:08:58.346 12:30:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:58.346 12:30:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:58.346 12:30:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:58.346 12:30:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:58.346 12:30:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.346 12:30:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.346 12:30:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:58.346 12:30:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:58.346 12:30:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:58.346 12:30:06 -- accel/accel.sh@42 -- # jq -r . 00:08:58.346 [2024-05-15 12:30:06.996375] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
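# dif_generate_copy, measured above, appears to fuse the two preceding
# operations: data is copied to a destination buffer while the protection
# information is generated on the way through, so one accel operation stands in
# for a generate-plus-copy pair. Standalone sketch:
#   /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy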
00:08:58.346 [2024-05-15 12:30:06.996554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:08:58.346 [2024-05-15 12:30:07.162634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.604 [2024-05-15 12:30:07.405037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val= 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val= 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val=0x1 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val= 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val= 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val= 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.862 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.862 12:30:07 -- accel/accel.sh@21 -- # val=software 00:08:58.862 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.862 12:30:07 -- accel/accel.sh@23 -- # accel_module=software 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.863 12:30:07 -- accel/accel.sh@21 -- # val=32 00:08:58.863 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.863 12:30:07 -- accel/accel.sh@21 -- # val=32 00:08:58.863 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.863 12:30:07 -- accel/accel.sh@21 
-- # val=1 00:08:58.863 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.863 12:30:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:58.863 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.863 12:30:07 -- accel/accel.sh@21 -- # val=No 00:08:58.863 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.863 12:30:07 -- accel/accel.sh@21 -- # val= 00:08:58.863 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:08:58.863 12:30:07 -- accel/accel.sh@21 -- # val= 00:08:58.863 12:30:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # IFS=: 00:08:58.863 12:30:07 -- accel/accel.sh@20 -- # read -r var val 00:09:00.762 12:30:09 -- accel/accel.sh@21 -- # val= 00:09:00.762 12:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # IFS=: 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # read -r var val 00:09:00.762 12:30:09 -- accel/accel.sh@21 -- # val= 00:09:00.762 12:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # IFS=: 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # read -r var val 00:09:00.762 12:30:09 -- accel/accel.sh@21 -- # val= 00:09:00.762 12:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # IFS=: 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # read -r var val 00:09:00.762 12:30:09 -- accel/accel.sh@21 -- # val= 00:09:00.762 12:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # IFS=: 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # read -r var val 00:09:00.762 12:30:09 -- accel/accel.sh@21 -- # val= 00:09:00.762 12:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # IFS=: 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # read -r var val 00:09:00.762 12:30:09 -- accel/accel.sh@21 -- # val= 00:09:00.762 12:30:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # IFS=: 00:09:00.762 12:30:09 -- accel/accel.sh@20 -- # read -r var val 00:09:00.762 12:30:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:00.762 12:30:09 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:09:00.762 12:30:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:00.762 00:09:00.762 real 0m5.269s 00:09:00.762 user 0m4.622s 00:09:00.762 sys 0m0.426s 00:09:00.762 ************************************ 00:09:00.762 END TEST accel_dif_generate_copy 00:09:00.762 12:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.762 12:30:09 -- common/autotest_common.sh@10 -- # set +x 00:09:00.762 ************************************ 00:09:00.762 12:30:09 -- accel/accel.sh@107 -- # [[ y == y ]] 00:09:00.762 12:30:09 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.762 12:30:09 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:00.762 12:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.762 12:30:09 -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.762 ************************************ 00:09:00.762 START TEST accel_comp 00:09:00.762 ************************************ 00:09:00.762 12:30:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.762 12:30:09 -- accel/accel.sh@16 -- # local accel_opc 00:09:00.762 12:30:09 -- accel/accel.sh@17 -- # local accel_module 00:09:00.762 12:30:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.762 12:30:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.762 12:30:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:00.762 12:30:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:00.762 12:30:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:00.762 12:30:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:00.762 12:30:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:00.762 12:30:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:00.762 12:30:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:00.762 12:30:09 -- accel/accel.sh@42 -- # jq -r . 00:09:00.762 [2024-05-15 12:30:09.599246] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:00.762 [2024-05-15 12:30:09.599389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60753 ] 00:09:00.762 [2024-05-15 12:30:09.768815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.328 [2024-05-15 12:30:10.050283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.229 12:30:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:03.229 00:09:03.229 SPDK Configuration: 00:09:03.229 Core mask: 0x1 00:09:03.229 00:09:03.229 Accel Perf Configuration: 00:09:03.229 Workload Type: compress 00:09:03.229 Transfer size: 4096 bytes 00:09:03.229 Vector count 1 00:09:03.229 Module: software 00:09:03.229 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:03.229 Queue depth: 32 00:09:03.229 Allocate depth: 32 00:09:03.229 # threads/core: 1 00:09:03.229 Run time: 1 seconds 00:09:03.229 Verify: No 00:09:03.229 00:09:03.229 Running for 1 seconds... 
00:09:03.229 00:09:03.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:03.229 ------------------------------------------------------------------------------------ 00:09:03.229 0,0 45312/s 177 MiB/s 0 0 00:09:03.229 ==================================================================================== 00:09:03.229 Total 45312/s 177 MiB/s 0 0' 00:09:03.229 12:30:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:03.229 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:03.229 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:03.229 12:30:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:03.229 12:30:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:03.229 12:30:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:03.229 12:30:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.229 12:30:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.229 12:30:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:03.229 12:30:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:03.229 12:30:12 -- accel/accel.sh@41 -- # local IFS=, 00:09:03.229 12:30:12 -- accel/accel.sh@42 -- # jq -r . 00:09:03.229 [2024-05-15 12:30:12.181101] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:03.229 [2024-05-15 12:30:12.181298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60779 ] 00:09:03.487 [2024-05-15 12:30:12.355429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.745 [2024-05-15 12:30:12.603983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.001 12:30:12 -- accel/accel.sh@21 -- # val= 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=:
00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val= 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val=software 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@23 -- # accel_module=software 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val=32 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val=32 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val=1 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val=No 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val= 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:04.002 12:30:12 -- accel/accel.sh@21 -- # val= 00:09:04.002 12:30:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # IFS=: 00:09:04.002 12:30:12 -- accel/accel.sh@20 -- # read -r var val 00:09:05.898 12:30:14 -- accel/accel.sh@21 -- # val= 00:09:05.898 12:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # IFS=: 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # read -r var val 00:09:05.898 12:30:14 -- accel/accel.sh@21 -- # val= 00:09:05.898 12:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # IFS=: 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # read -r var val 00:09:05.898 12:30:14 -- accel/accel.sh@21 -- # val= 00:09:05.898 12:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # IFS=: 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # read -r var val 00:09:05.898 12:30:14 -- accel/accel.sh@21 -- # val= 
00:09:05.898 12:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # IFS=: 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # read -r var val 00:09:05.898 12:30:14 -- accel/accel.sh@21 -- # val= 00:09:05.898 12:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # IFS=: 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # read -r var val 00:09:05.898 12:30:14 -- accel/accel.sh@21 -- # val= 00:09:05.898 12:30:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # IFS=: 00:09:05.898 12:30:14 -- accel/accel.sh@20 -- # read -r var val 00:09:05.898 12:30:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:05.898 12:30:14 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:09:05.898 ************************************ 00:09:05.898 END TEST accel_comp 00:09:05.898 ************************************ 00:09:05.898 12:30:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:05.898 00:09:05.898 real 0m5.139s 00:09:05.898 user 0m4.508s 00:09:05.898 sys 0m0.411s 00:09:05.898 12:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.898 12:30:14 -- common/autotest_common.sh@10 -- # set +x 00:09:05.898 12:30:14 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:05.898 12:30:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:05.899 12:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.899 12:30:14 -- common/autotest_common.sh@10 -- # set +x 00:09:05.899 ************************************ 00:09:05.899 START TEST accel_decomp 00:09:05.899 ************************************ 00:09:05.899 12:30:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:05.899 12:30:14 -- accel/accel.sh@16 -- # local accel_opc 00:09:05.899 12:30:14 -- accel/accel.sh@17 -- # local accel_module 00:09:05.899 12:30:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:05.899 12:30:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:05.899 12:30:14 -- accel/accel.sh@12 -- # build_accel_config 00:09:05.899 12:30:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:05.899 12:30:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.899 12:30:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.899 12:30:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:05.899 12:30:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:05.899 12:30:14 -- accel/accel.sh@41 -- # local IFS=, 00:09:05.899 12:30:14 -- accel/accel.sh@42 -- # jq -r . 00:09:05.899 [2024-05-15 12:30:14.789051] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:05.899 [2024-05-15 12:30:14.789235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60831 ] 00:09:06.158 [2024-05-15 12:30:14.951571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.415 [2024-05-15 12:30:15.205713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.315 12:30:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:08.315 00:09:08.315 SPDK Configuration: 00:09:08.315 Core mask: 0x1 00:09:08.315 00:09:08.315 Accel Perf Configuration: 00:09:08.315 Workload Type: decompress 00:09:08.315 Transfer size: 4096 bytes 00:09:08.315 Vector count 1 00:09:08.315 Module: software 00:09:08.315 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:08.315 Queue depth: 32 00:09:08.315 Allocate depth: 32 00:09:08.315 # threads/core: 1 00:09:08.315 Run time: 1 seconds 00:09:08.315 Verify: Yes 00:09:08.315 00:09:08.315 Running for 1 seconds... 00:09:08.315 00:09:08.315 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:08.315 ------------------------------------------------------------------------------------ 00:09:08.315 0,0 60768/s 112 MiB/s 0 0 00:09:08.315 ==================================================================================== 00:09:08.315 Total 60768/s 237 MiB/s 0 0' 00:09:08.315 12:30:17 -- accel/accel.sh@20 -- # IFS=: 00:09:08.315 12:30:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.315 12:30:17 -- accel/accel.sh@20 -- # read -r var val 00:09:08.315 12:30:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.315 12:30:17 -- accel/accel.sh@12 -- # build_accel_config 00:09:08.315 12:30:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:08.315 12:30:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.315 12:30:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.315 12:30:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:08.315 12:30:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:08.315 12:30:17 -- accel/accel.sh@41 -- # local IFS=, 00:09:08.315 12:30:17 -- accel/accel.sh@42 -- # jq -r . 00:09:08.574 [2024-05-15 12:30:17.340260] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
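The xtrace entries around each run show the shape of the accel_test helper: build_accel_config populates an accel_json_cfg array, the JSON is run through jq -r ., and the accel_perf example binary is launched with that config on /dev/fd/62. A minimal sketch of the pattern follows; apart from the flags, paths, and names visible in the trace, everything here (the SPDK_REPO variable and the fd-62 plumbing in particular) is an assumption, not the actual test/accel/accel.sh source:

    accel_test() {
        build_accel_config    # fills accel_json_cfg=(), as seen in the trace
        # hypothetical plumbing: hand the JSON config to accel_perf on fd 62
        "$SPDK_REPO/build/examples/accel_perf" -c /dev/fd/62 "$@" \
            62< <(printf '%s\n' "${accel_json_cfg[@]}" | jq -r .)
    }

    # e.g.: accel_test -t 1 -w compress -l "$SPDK_REPO/test/accel/bib"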
00:09:08.574 [2024-05-15 12:30:17.340427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60857 ] 00:09:08.574 [2024-05-15 12:30:17.515716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.832 [2024-05-15 12:30:17.818122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.090 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.090 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.090 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.090 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.090 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.090 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.090 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.090 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.090 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val=0x1 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val=decompress 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val=software 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@23 -- # accel_module=software 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val=32 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- 
accel/accel.sh@21 -- # val=32 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val=1 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val=Yes 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:09.091 12:30:18 -- accel/accel.sh@21 -- # val= 00:09:09.091 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:09:09.091 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:09:10.990 12:30:19 -- accel/accel.sh@21 -- # val= 00:09:10.990 12:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # IFS=: 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # read -r var val 00:09:10.990 12:30:19 -- accel/accel.sh@21 -- # val= 00:09:10.990 12:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # IFS=: 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # read -r var val 00:09:10.990 12:30:19 -- accel/accel.sh@21 -- # val= 00:09:10.990 12:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # IFS=: 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # read -r var val 00:09:10.990 12:30:19 -- accel/accel.sh@21 -- # val= 00:09:10.990 12:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # IFS=: 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # read -r var val 00:09:10.990 12:30:19 -- accel/accel.sh@21 -- # val= 00:09:10.990 12:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # IFS=: 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # read -r var val 00:09:10.990 12:30:19 -- accel/accel.sh@21 -- # val= 00:09:10.990 12:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # IFS=: 00:09:10.990 12:30:19 -- accel/accel.sh@20 -- # read -r var val 00:09:10.990 ************************************ 00:09:10.990 END TEST accel_decomp 00:09:10.990 ************************************ 00:09:10.990 12:30:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:10.990 12:30:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:10.990 12:30:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:10.990 00:09:10.990 real 0m5.169s 00:09:10.990 user 0m4.559s 00:09:10.990 sys 0m0.395s 00:09:10.990 12:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.990 12:30:19 -- common/autotest_common.sh@10 -- # set +x 00:09:10.990 12:30:19 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
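The long runs of case "$var" / IFS=: / read -r var val entries above come from a parsing loop in accel.sh that walks var:val pairs and records the opcode and module under test (accel_opc=decompress, accel_module=software in this trace). A plausible reading of that loop, where the key names and overall structure are assumptions and only the variable names appear in the log:

    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. compress, decompress
            module) accel_module=$val ;;  # e.g. software
        esac
    done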
00:09:10.990 12:30:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:10.990 12:30:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.990 12:30:19 -- common/autotest_common.sh@10 -- # set +x 00:09:10.990 ************************************ 00:09:10.990 START TEST accel_decmop_full 00:09:10.990 ************************************ 00:09:10.990 12:30:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:10.990 12:30:19 -- accel/accel.sh@16 -- # local accel_opc 00:09:10.990 12:30:19 -- accel/accel.sh@17 -- # local accel_module 00:09:10.990 12:30:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:10.990 12:30:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:10.990 12:30:19 -- accel/accel.sh@12 -- # build_accel_config 00:09:10.990 12:30:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:10.990 12:30:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.990 12:30:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.990 12:30:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:10.990 12:30:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:10.990 12:30:19 -- accel/accel.sh@41 -- # local IFS=, 00:09:10.990 12:30:19 -- accel/accel.sh@42 -- # jq -r . 00:09:11.248 [2024-05-15 12:30:20.008909] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:11.249 [2024-05-15 12:30:20.009832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:09:11.249 [2024-05-15 12:30:20.186165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.507 [2024-05-15 12:30:20.428046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.050 12:30:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:14.050 00:09:14.050 SPDK Configuration: 00:09:14.050 Core mask: 0x1 00:09:14.050 00:09:14.050 Accel Perf Configuration: 00:09:14.050 Workload Type: decompress 00:09:14.050 Transfer size: 111250 bytes 00:09:14.050 Vector count 1 00:09:14.050 Module: software 00:09:14.050 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:14.050 Queue depth: 32 00:09:14.050 Allocate depth: 32 00:09:14.050 # threads/core: 1 00:09:14.050 Run time: 1 seconds 00:09:14.050 Verify: Yes 00:09:14.050 00:09:14.050 Running for 1 seconds... 
00:09:14.050 00:09:14.050 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:14.050 ------------------------------------------------------------------------------------ 00:09:14.050 0,0 4352/s 179 MiB/s 0 0 00:09:14.050 ==================================================================================== 00:09:14.050 Total 4352/s 461 MiB/s 0 0' 00:09:14.050 12:30:22 -- accel/accel.sh@20 -- # IFS=: 00:09:14.050 12:30:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:14.050 12:30:22 -- accel/accel.sh@20 -- # read -r var val 00:09:14.050 12:30:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:14.050 12:30:22 -- accel/accel.sh@12 -- # build_accel_config 00:09:14.050 12:30:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:14.050 12:30:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.050 12:30:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.050 12:30:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:14.050 12:30:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:14.050 12:30:22 -- accel/accel.sh@41 -- # local IFS=, 00:09:14.050 12:30:22 -- accel/accel.sh@42 -- # jq -r . 00:09:14.050 [2024-05-15 12:30:22.596268] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:14.050 [2024-05-15 12:30:22.596432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60935 ] 00:09:14.050 [2024-05-15 12:30:22.765238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.050 [2024-05-15 12:30:23.004740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.317 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.317 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.317 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.317 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.317 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.317 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.317 12:30:23 -- accel/accel.sh@21 -- # val=0x1 00:09:14.317 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.317 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.317 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.317 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.317 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.317 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.317 12:30:23 -- accel/accel.sh@21 -- # val=decompress 00:09:14.317 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.317 12:30:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:14.317 12:30:23 -- accel/accel.sh@20 
-- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val=software 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@23 -- # accel_module=software 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val=32 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val=32 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val=1 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val=Yes 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:14.318 12:30:23 -- accel/accel.sh@21 -- # val= 00:09:14.318 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:09:14.318 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:09:16.221 12:30:25 -- accel/accel.sh@21 -- # val= 00:09:16.221 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:09:16.221 12:30:25 -- accel/accel.sh@21 -- # val= 00:09:16.221 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:09:16.221 12:30:25 -- accel/accel.sh@21 -- # val= 00:09:16.221 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:09:16.221 12:30:25 -- accel/accel.sh@21 -- # 
val= 00:09:16.221 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:09:16.221 12:30:25 -- accel/accel.sh@21 -- # val= 00:09:16.221 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:09:16.221 12:30:25 -- accel/accel.sh@21 -- # val= 00:09:16.221 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:09:16.221 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:09:16.221 12:30:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:16.221 12:30:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:16.221 12:30:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:16.221 00:09:16.221 real 0m5.144s 00:09:16.221 user 0m4.537s 00:09:16.221 sys 0m0.389s 00:09:16.221 ************************************ 00:09:16.221 END TEST accel_decmop_full 00:09:16.221 ************************************ 00:09:16.221 12:30:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.221 12:30:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.221 12:30:25 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:16.221 12:30:25 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:16.221 12:30:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:16.222 12:30:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.222 ************************************ 00:09:16.222 START TEST accel_decomp_mcore 00:09:16.222 ************************************ 00:09:16.222 12:30:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:16.222 12:30:25 -- accel/accel.sh@16 -- # local accel_opc 00:09:16.222 12:30:25 -- accel/accel.sh@17 -- # local accel_module 00:09:16.222 12:30:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:16.222 12:30:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:16.222 12:30:25 -- accel/accel.sh@12 -- # build_accel_config 00:09:16.222 12:30:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:16.222 12:30:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:16.222 12:30:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:16.222 12:30:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:16.222 12:30:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:16.222 12:30:25 -- accel/accel.sh@41 -- # local IFS=, 00:09:16.222 12:30:25 -- accel/accel.sh@42 -- # jq -r . 00:09:16.222 [2024-05-15 12:30:25.196377] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
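Note the effect of -y -o 0 in the *_full variants just finished: the reported transfer size rises from the 4096-byte default to 111250 bytes (apparently derived from the bib input rather than fixed), and the Total row still matches the rate-times-size product:

    4352 transfers/s x 111250 B = 484,160,000 B/s ≈ 461 MiB/s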
00:09:16.222 [2024-05-15 12:30:25.196560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60987 ] 00:09:16.480 [2024-05-15 12:30:25.361855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.738 [2024-05-15 12:30:25.639849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.738 [2024-05-15 12:30:25.639912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.738 [2024-05-15 12:30:25.640003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.738 [2024-05-15 12:30:25.640025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.267 12:30:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:19.267 00:09:19.267 SPDK Configuration: 00:09:19.267 Core mask: 0xf 00:09:19.267 00:09:19.267 Accel Perf Configuration: 00:09:19.267 Workload Type: decompress 00:09:19.267 Transfer size: 4096 bytes 00:09:19.267 Vector count 1 00:09:19.267 Module: software 00:09:19.267 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:19.267 Queue depth: 32 00:09:19.267 Allocate depth: 32 00:09:19.267 # threads/core: 1 00:09:19.267 Run time: 1 seconds 00:09:19.267 Verify: Yes 00:09:19.267 00:09:19.267 Running for 1 seconds... 00:09:19.267 00:09:19.267 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:19.267 ------------------------------------------------------------------------------------ 00:09:19.267 0,0 53376/s 98 MiB/s 0 0 00:09:19.267 3,0 48256/s 88 MiB/s 0 0 00:09:19.267 2,0 55136/s 101 MiB/s 0 0 00:09:19.267 1,0 54400/s 100 MiB/s 0 0 00:09:19.267 ==================================================================================== 00:09:19.267 Total 211168/s 824 MiB/s 0 0' 00:09:19.267 12:30:27 -- accel/accel.sh@20 -- # IFS=: 00:09:19.267 12:30:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:19.267 12:30:27 -- accel/accel.sh@20 -- # read -r var val 00:09:19.267 12:30:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:19.267 12:30:27 -- accel/accel.sh@12 -- # build_accel_config 00:09:19.267 12:30:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:19.267 12:30:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:19.267 12:30:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:19.267 12:30:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:19.267 12:30:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:19.267 12:30:27 -- accel/accel.sh@41 -- # local IFS=, 00:09:19.267 12:30:27 -- accel/accel.sh@42 -- # jq -r . 00:09:19.267 [2024-05-15 12:30:27.867626] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
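With the 0xf mask, the four per-core rows in the table above sum to the Total row: 53376 + 48256 + 55136 + 54400 = 211168 transfers/s, and 211168 x 4096 B ≈ 824 MiB/s, matching the reported aggregate.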
00:09:19.267 [2024-05-15 12:30:27.867784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61017 ] 00:09:19.267 [2024-05-15 12:30:28.043751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.526 [2024-05-15 12:30:28.301747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.526 [2024-05-15 12:30:28.301910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.526 [2024-05-15 12:30:28.302020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.526 [2024-05-15 12:30:28.302839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=0xf 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=decompress 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=software 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@23 -- # accel_module=software 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 
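The -m 0xf passed to accel_perf (and the matching -c 0xf in the DPDK EAL arguments) is a per-core bitmask:

    0xf = 0b1111 -> cores 0, 1, 2, 3

hence "Total cores available: 4" and the four reactor threads in the notices above.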
00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=32 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=32 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=1 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val=Yes 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:19.526 12:30:28 -- accel/accel.sh@21 -- # val= 00:09:19.526 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:09:19.526 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- 
accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@21 -- # val= 00:09:21.431 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:09:21.431 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:09:21.431 12:30:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:21.431 12:30:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:21.431 12:30:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:21.431 00:09:21.431 real 0m5.292s 00:09:21.431 user 0m15.107s 00:09:21.431 sys 0m0.449s 00:09:21.431 12:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.689 12:30:30 -- common/autotest_common.sh@10 -- # set +x 00:09:21.689 ************************************ 00:09:21.689 END TEST accel_decomp_mcore 00:09:21.689 ************************************ 00:09:21.689 12:30:30 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:21.689 12:30:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:21.689 12:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.689 12:30:30 -- common/autotest_common.sh@10 -- # set +x 00:09:21.689 ************************************ 00:09:21.689 START TEST accel_decomp_full_mcore 00:09:21.689 ************************************ 00:09:21.689 12:30:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:21.689 12:30:30 -- accel/accel.sh@16 -- # local accel_opc 00:09:21.689 12:30:30 -- accel/accel.sh@17 -- # local accel_module 00:09:21.689 12:30:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:21.689 12:30:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:21.689 12:30:30 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.689 12:30:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.689 12:30:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.689 12:30:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.689 12:30:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.689 12:30:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:21.689 12:30:30 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.689 12:30:30 -- accel/accel.sh@42 -- # jq -r . 00:09:21.689 [2024-05-15 12:30:30.542014] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:21.689 [2024-05-15 12:30:30.542173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:09:21.947 [2024-05-15 12:30:30.720210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.205 [2024-05-15 12:30:30.964772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.205 [2024-05-15 12:30:30.965279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.205 [2024-05-15 12:30:30.965460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.205 [2024-05-15 12:30:30.965597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.733 12:30:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:24.733 00:09:24.733 SPDK Configuration: 00:09:24.733 Core mask: 0xf 00:09:24.733 00:09:24.733 Accel Perf Configuration: 00:09:24.733 Workload Type: decompress 00:09:24.733 Transfer size: 111250 bytes 00:09:24.733 Vector count 1 00:09:24.733 Module: software 00:09:24.733 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:24.733 Queue depth: 32 00:09:24.733 Allocate depth: 32 00:09:24.733 # threads/core: 1 00:09:24.733 Run time: 1 seconds 00:09:24.733 Verify: Yes 00:09:24.733 00:09:24.733 Running for 1 seconds... 00:09:24.733 00:09:24.733 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:24.733 ------------------------------------------------------------------------------------ 00:09:24.733 0,0 4352/s 179 MiB/s 0 0 00:09:24.733 3,0 4352/s 179 MiB/s 0 0 00:09:24.733 2,0 4384/s 181 MiB/s 0 0 00:09:24.733 1,0 4160/s 171 MiB/s 0 0 00:09:24.733 ==================================================================================== 00:09:24.733 Total 17248/s 1829 MiB/s 0 0' 00:09:24.733 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.733 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.733 12:30:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:24.733 12:30:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:24.733 12:30:33 -- accel/accel.sh@12 -- # build_accel_config 00:09:24.733 12:30:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:24.733 12:30:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.733 12:30:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.733 12:30:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:24.733 12:30:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:24.733 12:30:33 -- accel/accel.sh@41 -- # local IFS=, 00:09:24.733 12:30:33 -- accel/accel.sh@42 -- # jq -r . 00:09:24.733 [2024-05-15 12:30:33.214181] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
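The same cross-check holds for the full-buffer mcore run above: 4352 + 4352 + 4384 + 4160 = 17248 transfers/s, and 17248 x 111250 B = 1,918,840,000 B/s ≈ 1829 MiB/s, as reported in the Total row.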
00:09:24.733 [2024-05-15 12:30:33.214438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61106 ] 00:09:24.733 [2024-05-15 12:30:33.423338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.734 [2024-05-15 12:30:33.713092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.734 [2024-05-15 12:30:33.713242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.734 [2024-05-15 12:30:33.713325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.734 [2024-05-15 12:30:33.714761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val=0xf 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val=decompress 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.991 12:30:33 -- accel/accel.sh@21 -- # val=software 00:09:24.991 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.991 12:30:33 -- accel/accel.sh@23 -- # accel_module=software 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.991 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 
00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val=32 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val=32 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val=1 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val=Yes 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:24.992 12:30:33 -- accel/accel.sh@21 -- # val= 00:09:24.992 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:09:24.992 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- 
accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@21 -- # val= 00:09:26.893 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:09:26.893 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:09:26.893 12:30:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:26.893 12:30:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:26.893 12:30:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:26.893 00:09:26.893 real 0m5.360s 00:09:26.893 user 0m7.660s 00:09:26.893 sys 0m0.235s 00:09:26.893 12:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.893 ************************************ 00:09:26.893 END TEST accel_decomp_full_mcore 00:09:26.893 ************************************ 00:09:26.893 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:09:26.893 12:30:35 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:26.893 12:30:35 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:26.893 12:30:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:26.893 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:09:26.893 ************************************ 00:09:26.893 START TEST accel_decomp_mthread 00:09:26.893 ************************************ 00:09:26.893 12:30:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:26.893 12:30:35 -- accel/accel.sh@16 -- # local accel_opc 00:09:26.893 12:30:35 -- accel/accel.sh@17 -- # local accel_module 00:09:26.893 12:30:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:26.893 12:30:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:26.893 12:30:35 -- accel/accel.sh@12 -- # build_accel_config 00:09:26.893 12:30:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:26.893 12:30:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.893 12:30:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.893 12:30:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:26.893 12:30:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:26.893 12:30:35 -- accel/accel.sh@41 -- # local IFS=, 00:09:26.893 12:30:35 -- accel/accel.sh@42 -- # jq -r . 00:09:27.152 [2024-05-15 12:30:35.937333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:27.152 [2024-05-15 12:30:35.937480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:09:27.152 [2024-05-15 12:30:36.099354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.411 [2024-05-15 12:30:36.337619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.972 12:30:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:29.972 00:09:29.972 SPDK Configuration: 00:09:29.972 Core mask: 0x1 00:09:29.972 00:09:29.972 Accel Perf Configuration: 00:09:29.972 Workload Type: decompress 00:09:29.972 Transfer size: 4096 bytes 00:09:29.972 Vector count 1 00:09:29.972 Module: software 00:09:29.972 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:29.972 Queue depth: 32 00:09:29.972 Allocate depth: 32 00:09:29.972 # threads/core: 2 00:09:29.972 Run time: 1 seconds 00:09:29.972 Verify: Yes 00:09:29.972 00:09:29.972 Running for 1 seconds... 00:09:29.972 00:09:29.972 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:29.972 ------------------------------------------------------------------------------------ 00:09:29.972 0,1 31168/s 57 MiB/s 0 0 00:09:29.972 0,0 31072/s 57 MiB/s 0 0 00:09:29.972 ==================================================================================== 00:09:29.972 Total 62240/s 243 MiB/s 0 0' 00:09:29.972 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:09:29.972 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:09:29.972 12:30:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:29.972 12:30:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:29.972 12:30:38 -- accel/accel.sh@12 -- # build_accel_config 00:09:29.972 12:30:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:29.972 12:30:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:29.972 12:30:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:29.972 12:30:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:29.972 12:30:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:29.972 12:30:38 -- accel/accel.sh@41 -- # local IFS=, 00:09:29.972 12:30:38 -- accel/accel.sh@42 -- # jq -r . 00:09:29.972 [2024-05-15 12:30:38.468600] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
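Here -T 2 puts two worker threads on the single enabled core, which is why the table shows rows 0,0 and 0,1; their rates sum to the Total row: 31168 + 31072 = 62240 transfers/s, i.e. 62240 x 4096 B ≈ 243 MiB/s.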
00:09:29.972 [2024-05-15 12:30:38.468770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61187 ] 00:09:29.972 [2024-05-15 12:30:38.642148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.972 [2024-05-15 12:30:38.878515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val=0x1 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val=decompress 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val=software 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@23 -- # accel_module=software 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val=32 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- 
accel/accel.sh@21 -- # val=32 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val=2 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val=Yes 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:30.231 12:30:39 -- accel/accel.sh@21 -- # val= 00:09:30.231 12:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # IFS=: 00:09:30.231 12:30:39 -- accel/accel.sh@20 -- # read -r var val 00:09:32.133 12:30:40 -- accel/accel.sh@21 -- # val= 00:09:32.133 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:09:32.133 12:30:40 -- accel/accel.sh@21 -- # val= 00:09:32.133 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:09:32.133 12:30:40 -- accel/accel.sh@21 -- # val= 00:09:32.133 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:09:32.133 12:30:40 -- accel/accel.sh@21 -- # val= 00:09:32.133 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:09:32.133 12:30:40 -- accel/accel.sh@21 -- # val= 00:09:32.133 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:09:32.133 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:09:32.133 12:30:40 -- accel/accel.sh@21 -- # val= 00:09:32.133 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.134 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:09:32.134 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:09:32.134 12:30:40 -- accel/accel.sh@21 -- # val= 00:09:32.134 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.134 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:09:32.134 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:09:32.134 ************************************ 00:09:32.134 END TEST accel_decomp_mthread 00:09:32.134 ************************************ 00:09:32.134 12:30:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:32.134 12:30:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:32.134 12:30:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:32.134 00:09:32.134 real 0m5.095s 00:09:32.134 user 0m4.496s 00:09:32.134 sys 0m0.379s 00:09:32.134 12:30:40 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:32.134 12:30:40 -- common/autotest_common.sh@10 -- # set +x 00:09:32.134 12:30:41 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:32.134 12:30:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:32.134 12:30:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.134 12:30:41 -- common/autotest_common.sh@10 -- # set +x 00:09:32.134 ************************************ 00:09:32.134 START TEST accel_deomp_full_mthread 00:09:32.134 ************************************ 00:09:32.134 12:30:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:32.134 12:30:41 -- accel/accel.sh@16 -- # local accel_opc 00:09:32.134 12:30:41 -- accel/accel.sh@17 -- # local accel_module 00:09:32.134 12:30:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:32.134 12:30:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:32.134 12:30:41 -- accel/accel.sh@12 -- # build_accel_config 00:09:32.134 12:30:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:32.134 12:30:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:32.134 12:30:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:32.134 12:30:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:32.134 12:30:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:32.134 12:30:41 -- accel/accel.sh@41 -- # local IFS=, 00:09:32.134 12:30:41 -- accel/accel.sh@42 -- # jq -r . 00:09:32.134 [2024-05-15 12:30:41.081214] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:32.134 [2024-05-15 12:30:41.081822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61228 ] 00:09:32.392 [2024-05-15 12:30:41.246087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.650 [2024-05-15 12:30:41.484881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.178 12:30:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:35.178 00:09:35.178 SPDK Configuration: 00:09:35.178 Core mask: 0x1 00:09:35.178 00:09:35.178 Accel Perf Configuration: 00:09:35.178 Workload Type: decompress 00:09:35.178 Transfer size: 111250 bytes 00:09:35.178 Vector count 1 00:09:35.178 Module: software 00:09:35.178 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:35.178 Queue depth: 32 00:09:35.178 Allocate depth: 32 00:09:35.178 # threads/core: 2 00:09:35.178 Run time: 1 seconds 00:09:35.178 Verify: Yes 00:09:35.178 00:09:35.178 Running for 1 seconds... 
00:09:35.178 00:09:35.178 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:35.178 ------------------------------------------------------------------------------------ 00:09:35.178 0,1 1952/s 80 MiB/s 0 0 00:09:35.178 0,0 1952/s 80 MiB/s 0 0 00:09:35.178 ==================================================================================== 00:09:35.178 Total 3904/s 414 MiB/s 0 0' 00:09:35.178 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:09:35.178 12:30:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:35.178 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:09:35.178 12:30:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:35.178 12:30:43 -- accel/accel.sh@12 -- # build_accel_config 00:09:35.178 12:30:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:35.178 12:30:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:35.178 12:30:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:35.178 12:30:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:35.178 12:30:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:35.178 12:30:43 -- accel/accel.sh@41 -- # local IFS=, 00:09:35.178 12:30:43 -- accel/accel.sh@42 -- # jq -r . 00:09:35.178 [2024-05-15 12:30:43.656972] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:35.178 [2024-05-15 12:30:43.657121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61265 ] 00:09:35.178 [2024-05-15 12:30:43.822236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.178 [2024-05-15 12:30:44.061550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=0x1 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=decompress 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=software 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@23 -- # accel_module=software 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=32 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=32 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=2 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val=Yes 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:35.437 12:30:44 -- accel/accel.sh@21 -- # val= 00:09:35.437 12:30:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # IFS=: 00:09:35.437 12:30:44 -- accel/accel.sh@20 -- # read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@21 -- # val= 00:09:37.336 12:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # IFS=: 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@21 -- # val= 00:09:37.336 12:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # IFS=: 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@21 -- # val= 00:09:37.336 12:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # IFS=: 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # 
read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@21 -- # val= 00:09:37.336 12:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # IFS=: 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@21 -- # val= 00:09:37.336 12:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # IFS=: 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@21 -- # val= 00:09:37.336 12:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # IFS=: 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@21 -- # val= 00:09:37.336 12:30:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # IFS=: 00:09:37.336 12:30:46 -- accel/accel.sh@20 -- # read -r var val 00:09:37.336 12:30:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:37.336 12:30:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:37.336 12:30:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:37.336 00:09:37.336 real 0m5.139s 00:09:37.336 user 0m4.530s 00:09:37.336 sys 0m0.385s 00:09:37.336 12:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.336 ************************************ 00:09:37.336 END TEST accel_deomp_full_mthread 00:09:37.336 ************************************ 00:09:37.336 12:30:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 12:30:46 -- accel/accel.sh@116 -- # [[ n == y ]] 00:09:37.336 12:30:46 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:37.336 12:30:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:37.336 12:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.336 12:30:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 12:30:46 -- accel/accel.sh@129 -- # build_accel_config 00:09:37.336 12:30:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:37.336 12:30:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.336 12:30:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.336 12:30:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:37.336 12:30:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:37.336 12:30:46 -- accel/accel.sh@41 -- # local IFS=, 00:09:37.336 12:30:46 -- accel/accel.sh@42 -- # jq -r . 00:09:37.336 ************************************ 00:09:37.336 START TEST accel_dif_functional_tests 00:09:37.336 ************************************ 00:09:37.336 12:30:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:37.336 [2024-05-15 12:30:46.321132] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
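The suite that follows exercises SPDK's DIF (Data Integrity Field) verify and generate-copy paths: the Guard is a CRC over the block data, while the App Tag and Ref Tag are compared against expected values, which is why the injected-error cases below report "Failed to compare Guard/App Tag/Ref Tag". A sketch of running the same binary by hand (command taken from this log; the empty JSON config is an assumption):

/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{}')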
00:09:37.336 [2024-05-15 12:30:46.321313] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61313 ] 00:09:37.594 [2024-05-15 12:30:46.496323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.852 [2024-05-15 12:30:46.738563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.852 [2024-05-15 12:30:46.738671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.852 [2024-05-15 12:30:46.738686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.110 00:09:38.110 00:09:38.110 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.110 http://cunit.sourceforge.net/ 00:09:38.110 00:09:38.110 00:09:38.110 Suite: accel_dif 00:09:38.110 Test: verify: DIF generated, GUARD check ...passed 00:09:38.110 Test: verify: DIF generated, APPTAG check ...passed 00:09:38.110 Test: verify: DIF generated, REFTAG check ...passed 00:09:38.110 Test: verify: DIF not generated, GUARD check ...passed 00:09:38.110 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 12:30:47.056095] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:38.110 [2024-05-15 12:30:47.056384] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:38.110 [2024-05-15 12:30:47.056506] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:38.110 passed 00:09:38.110 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 12:30:47.056653] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:38.110 [2024-05-15 12:30:47.056755] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:38.110 [2024-05-15 12:30:47.056962] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:38.110 passed 00:09:38.110 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:38.110 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 12:30:47.057287] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:38.110 passed 00:09:38.110 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:38.110 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:38.110 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:38.110 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:38.110 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 12:30:47.057791] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:38.110 passed 00:09:38.110 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:38.110 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:38.110 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:38.110 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:38.110 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:38.110 Test: generate copy: iovecs-len validate ...[2024-05-15 12:30:47.058700] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:38.110 passed 00:09:38.110 Test: generate copy: buffer alignment validate ...passed 00:09:38.110 00:09:38.110 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.110 suites 1 1 n/a 0 0 00:09:38.110 tests 20 20 20 0 0 00:09:38.111 asserts 204 204 204 0 n/a 00:09:38.111 00:09:38.111 Elapsed time = 0.007 seconds 00:09:39.486 00:09:39.486 real 0m1.964s 00:09:39.486 user 0m3.703s 00:09:39.486 sys 0m0.284s 00:09:39.486 ************************************ 00:09:39.486 END TEST accel_dif_functional_tests 00:09:39.486 ************************************ 00:09:39.486 12:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.486 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.486 00:09:39.486 real 1m54.258s 00:09:39.486 user 2m3.345s 00:09:39.486 sys 0m10.275s 00:09:39.486 12:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.486 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.486 ************************************ 00:09:39.486 END TEST accel 00:09:39.486 ************************************ 00:09:39.486 12:30:48 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:39.486 12:30:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:39.486 12:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:39.486 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.486 ************************************ 00:09:39.486 START TEST accel_rpc 00:09:39.486 ************************************ 00:09:39.486 12:30:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:39.486 * Looking for test storage... 00:09:39.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:39.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.486 12:30:48 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:39.486 12:30:48 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61394 00:09:39.486 12:30:48 -- accel/accel_rpc.sh@15 -- # waitforlisten 61394 00:09:39.486 12:30:48 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:39.486 12:30:48 -- common/autotest_common.sh@819 -- # '[' -z 61394 ']' 00:09:39.486 12:30:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.486 12:30:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:39.486 12:30:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.486 12:30:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:39.486 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.486 [2024-05-15 12:30:48.461658] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
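The accel_rpc test below checks that an opcode-to-module assignment made before subsystem initialization sticks. Condensed, the RPC sequence it drives against a target started with --wait-for-rpc looks like this (rpc.py path as used elsewhere in this log):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC accel_assign_opc -o copy -m software     # must run before framework init
$RPC framework_start_init                     # finish subsystem initialization
$RPC accel_get_opc_assignments | jq -r .copy  # expected output: software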
00:09:39.486 [2024-05-15 12:30:48.462127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61394 ] 00:09:39.744 [2024-05-15 12:30:48.633322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.002 [2024-05-15 12:30:48.866858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.002 [2024-05-15 12:30:48.867428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.575 12:30:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:40.575 12:30:49 -- common/autotest_common.sh@852 -- # return 0 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:40.575 12:30:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:40.575 12:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.575 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:09:40.575 ************************************ 00:09:40.575 START TEST accel_assign_opcode 00:09:40.575 ************************************ 00:09:40.575 12:30:49 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:40.575 12:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.575 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:09:40.575 [2024-05-15 12:30:49.456573] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:40.575 12:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:40.575 12:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.575 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:09:40.575 [2024-05-15 12:30:49.464495] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:40.575 12:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.575 12:30:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:40.575 12:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.575 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 12:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:41.510 12:30:50 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:41.510 12:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:41.510 12:30:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 12:30:50 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:41.510 12:30:50 -- accel/accel_rpc.sh@42 -- # grep software 00:09:41.510 12:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:41.510 software 00:09:41.510 ************************************ 00:09:41.510 END TEST accel_assign_opcode 00:09:41.510 ************************************ 00:09:41.510 00:09:41.510 real 0m0.837s 00:09:41.510 user 0m0.054s 00:09:41.510 sys 0m0.008s 00:09:41.510 12:30:50 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.510 12:30:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 12:30:50 -- accel/accel_rpc.sh@55 -- # killprocess 61394 00:09:41.510 12:30:50 -- common/autotest_common.sh@926 -- # '[' -z 61394 ']' 00:09:41.511 12:30:50 -- common/autotest_common.sh@930 -- # kill -0 61394 00:09:41.511 12:30:50 -- common/autotest_common.sh@931 -- # uname 00:09:41.511 12:30:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:41.511 12:30:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61394 00:09:41.511 killing process with pid 61394 00:09:41.511 12:30:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:41.511 12:30:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:41.511 12:30:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61394' 00:09:41.511 12:30:50 -- common/autotest_common.sh@945 -- # kill 61394 00:09:41.511 12:30:50 -- common/autotest_common.sh@950 -- # wait 61394 00:09:44.037 ************************************ 00:09:44.037 END TEST accel_rpc 00:09:44.037 ************************************ 00:09:44.037 00:09:44.037 real 0m4.418s 00:09:44.037 user 0m4.405s 00:09:44.037 sys 0m0.588s 00:09:44.037 12:30:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.038 12:30:52 -- common/autotest_common.sh@10 -- # set +x 00:09:44.038 12:30:52 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:44.038 12:30:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:44.038 12:30:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.038 12:30:52 -- common/autotest_common.sh@10 -- # set +x 00:09:44.038 ************************************ 00:09:44.038 START TEST app_cmdline 00:09:44.038 ************************************ 00:09:44.038 12:30:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:44.038 * Looking for test storage... 00:09:44.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:44.038 12:30:52 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:44.038 12:30:52 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61509 00:09:44.038 12:30:52 -- app/cmdline.sh@18 -- # waitforlisten 61509 00:09:44.038 12:30:52 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:44.038 12:30:52 -- common/autotest_common.sh@819 -- # '[' -z 61509 ']' 00:09:44.038 12:30:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.038 12:30:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:44.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.038 12:30:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.038 12:30:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:44.038 12:30:52 -- common/autotest_common.sh@10 -- # set +x 00:09:44.038 [2024-05-15 12:30:52.928098] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
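The cmdline test below starts the target with an RPC allow-list, so only the two whitelisted methods are reachable and any other call fails with JSON-RPC error -32601. A minimal sketch of the behavior being verified:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
  --rpcs-allowed spdk_get_version,rpc_get_methods &
# (once the /var/tmp/spdk.sock socket is up)
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC spdk_get_version          # allowed: returns the version object seen below
$RPC env_dpdk_get_mem_stats    # rejected: 'Method not found' (code -32601)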
00:09:44.038 [2024-05-15 12:30:52.928534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61509 ] 00:09:44.296 [2024-05-15 12:30:53.102895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.555 [2024-05-15 12:30:53.346164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:44.555 [2024-05-15 12:30:53.346633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.930 12:30:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.930 12:30:54 -- common/autotest_common.sh@852 -- # return 0 00:09:45.930 12:30:54 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:45.930 { 00:09:45.930 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:09:45.930 "fields": { 00:09:45.930 "major": 24, 00:09:45.930 "minor": 1, 00:09:45.930 "patch": 1, 00:09:45.930 "suffix": "-pre", 00:09:45.930 "commit": "36faa8c31" 00:09:45.930 } 00:09:45.930 } 00:09:45.930 12:30:54 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:45.930 12:30:54 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:45.930 12:30:54 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:45.930 12:30:54 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:45.930 12:30:54 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:45.930 12:30:54 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:45.930 12:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.930 12:30:54 -- app/cmdline.sh@26 -- # sort 00:09:45.930 12:30:54 -- common/autotest_common.sh@10 -- # set +x 00:09:45.930 12:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.930 12:30:54 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:45.930 12:30:54 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:45.930 12:30:54 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:45.930 12:30:54 -- common/autotest_common.sh@640 -- # local es=0 00:09:45.930 12:30:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:45.930 12:30:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.930 12:30:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:45.930 12:30:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.930 12:30:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:45.930 12:30:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.930 12:30:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:45.930 12:30:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.930 12:30:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:45.930 12:30:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:46.189 request: 00:09:46.189 { 00:09:46.189 "method": "env_dpdk_get_mem_stats", 00:09:46.189 "req_id": 1 00:09:46.189 } 00:09:46.189 Got 
JSON-RPC error response 00:09:46.189 response: 00:09:46.189 { 00:09:46.189 "code": -32601, 00:09:46.189 "message": "Method not found" 00:09:46.189 } 00:09:46.189 12:30:55 -- common/autotest_common.sh@643 -- # es=1 00:09:46.189 12:30:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:46.189 12:30:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:46.189 12:30:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:46.189 12:30:55 -- app/cmdline.sh@1 -- # killprocess 61509 00:09:46.189 12:30:55 -- common/autotest_common.sh@926 -- # '[' -z 61509 ']' 00:09:46.189 12:30:55 -- common/autotest_common.sh@930 -- # kill -0 61509 00:09:46.189 12:30:55 -- common/autotest_common.sh@931 -- # uname 00:09:46.189 12:30:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:46.189 12:30:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61509 00:09:46.446 killing process with pid 61509 00:09:46.446 12:30:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:46.446 12:30:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:46.446 12:30:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61509' 00:09:46.446 12:30:55 -- common/autotest_common.sh@945 -- # kill 61509 00:09:46.446 12:30:55 -- common/autotest_common.sh@950 -- # wait 61509 00:09:49.013 00:09:49.013 real 0m4.997s 00:09:49.013 user 0m5.578s 00:09:49.013 sys 0m0.647s 00:09:49.013 12:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.013 ************************************ 00:09:49.013 END TEST app_cmdline 00:09:49.013 ************************************ 00:09:49.013 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:49.013 12:30:57 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:49.013 12:30:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.013 12:30:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.013 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:49.013 ************************************ 00:09:49.013 START TEST version 00:09:49.013 ************************************ 00:09:49.013 12:30:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:49.013 * Looking for test storage... 
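The version test traced below assembles the version string by grepping the SPDK_VERSION_* defines out of include/spdk/version.h and comparing the result against what the Python bindings report. Condensed, the extraction it performs looks like this (MAJOR shown; MINOR, PATCH and SUFFIX follow the same pattern):

H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
# pieces combine to 24.1 -> 24.1.1 (patch != 0) -> 24.1.1rc0, then compared with:
python3 -c 'import spdk; print(spdk.__version__)'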
00:09:49.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:49.013 12:30:57 -- app/version.sh@17 -- # get_header_version major 00:09:49.013 12:30:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.013 12:30:57 -- app/version.sh@14 -- # tr -d '"' 00:09:49.013 12:30:57 -- app/version.sh@14 -- # cut -f2 00:09:49.013 12:30:57 -- app/version.sh@17 -- # major=24 00:09:49.013 12:30:57 -- app/version.sh@18 -- # get_header_version minor 00:09:49.013 12:30:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.013 12:30:57 -- app/version.sh@14 -- # cut -f2 00:09:49.013 12:30:57 -- app/version.sh@14 -- # tr -d '"' 00:09:49.013 12:30:57 -- app/version.sh@18 -- # minor=1 00:09:49.013 12:30:57 -- app/version.sh@19 -- # get_header_version patch 00:09:49.013 12:30:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.013 12:30:57 -- app/version.sh@14 -- # cut -f2 00:09:49.013 12:30:57 -- app/version.sh@14 -- # tr -d '"' 00:09:49.013 12:30:57 -- app/version.sh@19 -- # patch=1 00:09:49.013 12:30:57 -- app/version.sh@20 -- # get_header_version suffix 00:09:49.013 12:30:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.013 12:30:57 -- app/version.sh@14 -- # cut -f2 00:09:49.013 12:30:57 -- app/version.sh@14 -- # tr -d '"' 00:09:49.013 12:30:57 -- app/version.sh@20 -- # suffix=-pre 00:09:49.013 12:30:57 -- app/version.sh@22 -- # version=24.1 00:09:49.013 12:30:57 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:49.013 12:30:57 -- app/version.sh@25 -- # version=24.1.1 00:09:49.013 12:30:57 -- app/version.sh@28 -- # version=24.1.1rc0 00:09:49.013 12:30:57 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:49.013 12:30:57 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:49.013 12:30:57 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:09:49.013 12:30:57 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:09:49.013 00:09:49.013 real 0m0.108s 00:09:49.013 user 0m0.063s 00:09:49.013 sys 0m0.066s 00:09:49.013 12:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.013 ************************************ 00:09:49.013 END TEST version 00:09:49.013 ************************************ 00:09:49.013 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:49.013 12:30:57 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:09:49.013 12:30:57 -- spdk/autotest.sh@204 -- # uname -s 00:09:49.013 12:30:57 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:09:49.013 12:30:57 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:49.013 12:30:57 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:49.013 12:30:57 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:09:49.013 12:30:57 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:49.013 12:30:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:49.013 12:30:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.013 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:49.013 
************************************ 00:09:49.013 START TEST blockdev_nvme 00:09:49.013 ************************************ 00:09:49.013 12:30:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:49.013 * Looking for test storage... 00:09:49.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:49.013 12:30:57 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:49.013 12:30:57 -- bdev/nbd_common.sh@6 -- # set -e 00:09:49.013 12:30:57 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:49.013 12:30:57 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:49.013 12:30:57 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:49.013 12:30:57 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:49.013 12:30:57 -- bdev/blockdev.sh@18 -- # : 00:09:49.013 12:30:57 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:49.013 12:30:57 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:49.013 12:30:57 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:49.013 12:30:57 -- bdev/blockdev.sh@672 -- # uname -s 00:09:49.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.013 12:30:57 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:49.013 12:30:57 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:49.013 12:30:57 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:09:49.013 12:30:57 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:49.013 12:30:57 -- bdev/blockdev.sh@682 -- # dek= 00:09:49.013 12:30:57 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:49.013 12:30:57 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:49.013 12:30:57 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:49.013 12:30:57 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:09:49.013 12:30:57 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:09:49.013 12:30:57 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:49.013 12:30:57 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61687 00:09:49.013 12:30:57 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:49.013 12:30:57 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:49.013 12:30:57 -- bdev/blockdev.sh@47 -- # waitforlisten 61687 00:09:49.013 12:30:57 -- common/autotest_common.sh@819 -- # '[' -z 61687 ']' 00:09:49.013 12:30:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.013 12:30:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:49.013 12:30:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.013 12:30:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:49.013 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:49.272 [2024-05-15 12:30:58.115666] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
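The blockdev_nvme setup that follows loads a bdev subsystem config generated by gen_nvme.sh, attaching one NVMe controller per PCIe address. The shape of the JSON handed to load_subsystem_config, abbreviated here to the first of the four controllers in this log:

{
  "subsystem": "bdev",
  "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
  ]
}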
00:09:49.272 [2024-05-15 12:30:58.115829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61687 ] 00:09:49.272 [2024-05-15 12:30:58.279389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.529 [2024-05-15 12:30:58.518657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:49.530 [2024-05-15 12:30:58.518925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.905 12:30:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:50.905 12:30:59 -- common/autotest_common.sh@852 -- # return 0 00:09:50.905 12:30:59 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:50.905 12:30:59 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:09:50.905 12:30:59 -- bdev/blockdev.sh@79 -- # local json 00:09:50.905 12:30:59 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:50.905 12:30:59 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:50.905 12:30:59 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:50.905 12:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:50.905 12:30:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.162 12:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.162 12:31:00 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:51.162 12:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.162 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.162 12:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.162 12:31:00 -- bdev/blockdev.sh@738 -- # cat 00:09:51.162 12:31:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:51.162 12:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.162 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.162 12:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.162 12:31:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:51.162 12:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.162 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.162 12:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.162 12:31:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:51.162 12:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.163 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.163 12:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.163 12:31:00 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:51.163 12:31:00 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:51.163 12:31:00 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:51.163 12:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.163 12:31:00 -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.421 12:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.421 12:31:00 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:51.421 12:31:00 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:51.422 12:31:00 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4b917a1a-947c-4f07-8ec0-89bc1f4f6d76"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4b917a1a-947c-4f07-8ec0-89bc1f4f6d76",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e23e6b53-6c48-4e2f-810c-f4d3e670d857"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e23e6b53-6c48-4e2f-810c-f4d3e670d857",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "66a01c45-af0b-4262-a008-bf6c60e9504f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66a01c45-af0b-4262-a008-bf6c60e9504f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "604501b7-584b-4c9b-af3c-ea28f2306783"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "604501b7-584b-4c9b-af3c-ea28f2306783",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d5845350-f1e5-4059-b386-9e0b8aadf6b4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d5845350-f1e5-4059-b386-9e0b8aadf6b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "47e5a58f-ce6f-438e-9651-3f187ff9ecc4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"47e5a58f-ce6f-438e-9651-3f187ff9ecc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:51.422 12:31:00 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:51.422 12:31:00 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:09:51.422 12:31:00 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:51.422 12:31:00 -- bdev/blockdev.sh@752 -- # killprocess 61687 00:09:51.422 12:31:00 -- common/autotest_common.sh@926 -- # '[' -z 61687 ']' 00:09:51.422 12:31:00 -- common/autotest_common.sh@930 -- # kill -0 61687 00:09:51.422 12:31:00 -- common/autotest_common.sh@931 -- # uname 00:09:51.422 12:31:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:51.422 12:31:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61687 00:09:51.422 killing process with pid 61687 00:09:51.422 12:31:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:51.422 12:31:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:51.422 12:31:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61687' 00:09:51.422 12:31:00 -- common/autotest_common.sh@945 -- # kill 61687 00:09:51.422 12:31:00 -- common/autotest_common.sh@950 -- # wait 61687 00:09:53.960 12:31:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:53.961 12:31:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:53.961 12:31:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:53.961 12:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.961 12:31:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.961 ************************************ 00:09:53.961 START TEST bdev_hello_world 00:09:53.961 ************************************ 00:09:53.961 12:31:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:53.961 [2024-05-15 12:31:02.517075] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:53.961 [2024-05-15 12:31:02.517551] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61789 ] 00:09:53.961 [2024-05-15 12:31:02.695486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.961 [2024-05-15 12:31:02.932593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.895 [2024-05-15 12:31:03.559521] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:54.895 [2024-05-15 12:31:03.559611] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:54.895 [2024-05-15 12:31:03.559660] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:54.895 [2024-05-15 12:31:03.562714] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:54.895 [2024-05-15 12:31:03.563284] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:54.895 [2024-05-15 12:31:03.563325] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:54.895 [2024-05-15 12:31:03.563516] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:54.895 00:09:54.895 [2024-05-15 12:31:03.563548] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:55.828 00:09:55.828 real 0m2.189s 00:09:55.828 user 0m1.781s 00:09:55.828 sys 0m0.294s 00:09:55.828 12:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.828 ************************************ 00:09:55.828 END TEST bdev_hello_world 00:09:55.828 ************************************ 00:09:55.828 12:31:04 -- common/autotest_common.sh@10 -- # set +x 00:09:55.828 12:31:04 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:55.828 12:31:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:55.828 12:31:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.828 12:31:04 -- common/autotest_common.sh@10 -- # set +x 00:09:55.828 ************************************ 00:09:55.828 START TEST bdev_bounds 00:09:55.828 ************************************ 00:09:55.828 12:31:04 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:55.828 Process bdevio pid: 61832 00:09:55.828 12:31:04 -- bdev/blockdev.sh@288 -- # bdevio_pid=61832 00:09:55.828 12:31:04 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:55.828 12:31:04 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:55.828 12:31:04 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61832' 00:09:55.828 12:31:04 -- bdev/blockdev.sh@291 -- # waitforlisten 61832 00:09:55.829 12:31:04 -- common/autotest_common.sh@819 -- # '[' -z 61832 ']' 00:09:55.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.829 12:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.829 12:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:55.829 12:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
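The bdev_bounds test starts bdevio in wait mode and then triggers the CUnit suites over RPC, which produces the I/O target listing and the per-bdev suites below. The two-step flow, with paths from this log:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &   # -w: wait for the RPC trigger
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests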
00:09:55.829 12:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:55.829 12:31:04 -- common/autotest_common.sh@10 -- # set +x 00:09:55.829 [2024-05-15 12:31:04.751909] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:55.829 [2024-05-15 12:31:04.752094] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61832 ] 00:09:56.087 [2024-05-15 12:31:04.924774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.345 [2024-05-15 12:31:05.166388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.345 [2024-05-15 12:31:05.166570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.345 [2024-05-15 12:31:05.166755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.718 12:31:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:57.718 12:31:06 -- common/autotest_common.sh@852 -- # return 0 00:09:57.718 12:31:06 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:57.718 I/O targets: 00:09:57.718 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:57.718 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:57.718 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:57.718 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:57.718 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:57.718 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:57.718 00:09:57.718 00:09:57.718 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.718 http://cunit.sourceforge.net/ 00:09:57.718 00:09:57.718 00:09:57.718 Suite: bdevio tests on: Nvme3n1 00:09:57.718 Test: blockdev write read block ...passed 00:09:57.718 Test: blockdev write zeroes read block ...passed 00:09:57.718 Test: blockdev write zeroes read no split ...passed 00:09:57.718 Test: blockdev write zeroes read split ...passed 00:09:57.718 Test: blockdev write zeroes read split partial ...passed 00:09:57.718 Test: blockdev reset ...[2024-05-15 12:31:06.597770] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:57.718 passed 00:09:57.718 Test: blockdev write read 8 blocks ...[2024-05-15 12:31:06.601475] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
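The six bdevio suites logged here are all driven the same way: the bdevio app is started in wait mode on the default RPC socket, and tests.py triggers every registered blockdev test. A sketch using the arguments from this run:

    # server: load the bdevs and wait (-w) to be told to run
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # the harness waits for /var/tmp/spdk.sock to appear, then fires:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests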
00:09:57.718 passed 00:09:57.718 Test: blockdev write read size > 128k ...passed 00:09:57.718 Test: blockdev write read invalid size ...passed 00:09:57.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.718 Test: blockdev write read max offset ...passed 00:09:57.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.718 Test: blockdev writev readv 8 blocks ...passed 00:09:57.718 Test: blockdev writev readv 30 x 1block ...passed 00:09:57.718 Test: blockdev writev readv block ...passed 00:09:57.718 Test: blockdev writev readv size > 128k ...passed 00:09:57.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:57.719 Test: blockdev comparev and writev ...[2024-05-15 12:31:06.609379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29100e000 len:0x1000 00:09:57.719 [2024-05-15 12:31:06.609444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:57.719 passed 00:09:57.719 Test: blockdev nvme passthru rw ...passed 00:09:57.719 Test: blockdev nvme passthru vendor specific ...passed 00:09:57.719 Test: blockdev nvme admin passthru ...[2024-05-15 12:31:06.610313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:57.719 [2024-05-15 12:31:06.610362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:57.719 passed 00:09:57.719 Test: blockdev copy ...passed 00:09:57.719 Suite: bdevio tests on: Nvme2n3 00:09:57.719 Test: blockdev write read block ...passed 00:09:57.719 Test: blockdev write zeroes read block ...passed 00:09:57.719 Test: blockdev write zeroes read no split ...passed 00:09:57.719 Test: blockdev write zeroes read split ...passed 00:09:57.719 Test: blockdev write zeroes read split partial ...passed 00:09:57.719 Test: blockdev reset ...[2024-05-15 12:31:06.676952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:57.719 [2024-05-15 12:31:06.681047] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:57.719 passed 00:09:57.719 Test: blockdev write read 8 blocks ...passed 00:09:57.719 Test: blockdev write read size > 128k ...passed 00:09:57.719 Test: blockdev write read invalid size ...passed 00:09:57.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.719 Test: blockdev write read max offset ...passed 00:09:57.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.719 Test: blockdev writev readv 8 blocks ...passed 00:09:57.719 Test: blockdev writev readv 30 x 1block ...passed 00:09:57.719 Test: blockdev writev readv block ...passed 00:09:57.719 Test: blockdev writev readv size > 128k ...passed 00:09:57.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:57.719 Test: blockdev comparev and writev ...[2024-05-15 12:31:06.689403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29100a000 len:0x1000 00:09:57.719 [2024-05-15 12:31:06.689462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:57.719 passed 00:09:57.719 Test: blockdev nvme passthru rw ...passed 00:09:57.719 Test: blockdev nvme passthru vendor specific ...passed 00:09:57.719 Test: blockdev nvme admin passthru ...[2024-05-15 12:31:06.690322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:57.719 [2024-05-15 12:31:06.690366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:57.719 passed 00:09:57.719 Test: blockdev copy ...passed 00:09:57.719 Suite: bdevio tests on: Nvme2n2 00:09:57.719 Test: blockdev write read block ...passed 00:09:57.719 Test: blockdev write zeroes read block ...passed 00:09:57.719 Test: blockdev write zeroes read no split ...passed 00:09:57.977 Test: blockdev write zeroes read split ...passed 00:09:57.977 Test: blockdev write zeroes read split partial ...passed 00:09:57.977 Test: blockdev reset ...[2024-05-15 12:31:06.759968] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:57.977 [2024-05-15 12:31:06.763905] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:57.977 passed 00:09:57.977 Test: blockdev write read 8 blocks ...passed 00:09:57.977 Test: blockdev write read size > 128k ...passed 00:09:57.977 Test: blockdev write read invalid size ...passed 00:09:57.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.977 Test: blockdev write read max offset ...passed 00:09:57.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.977 Test: blockdev writev readv 8 blocks ...passed 00:09:57.977 Test: blockdev writev readv 30 x 1block ...passed 00:09:57.977 Test: blockdev writev readv block ...passed 00:09:57.977 Test: blockdev writev readv size > 128k ...passed 00:09:57.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:57.977 Test: blockdev comparev and writev ...[2024-05-15 12:31:06.771764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x285606000 len:0x1000 00:09:57.977 [2024-05-15 12:31:06.771831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:57.977 passed 00:09:57.977 Test: blockdev nvme passthru rw ...passed 00:09:57.977 Test: blockdev nvme passthru vendor specific ...passed 00:09:57.977 Test: blockdev nvme admin passthru ...[2024-05-15 12:31:06.772688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:57.977 [2024-05-15 12:31:06.772734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:57.977 passed 00:09:57.977 Test: blockdev copy ...passed 00:09:57.977 Suite: bdevio tests on: Nvme2n1 00:09:57.977 Test: blockdev write read block ...passed 00:09:57.977 Test: blockdev write zeroes read block ...passed 00:09:57.977 Test: blockdev write zeroes read no split ...passed 00:09:57.977 Test: blockdev write zeroes read split ...passed 00:09:57.977 Test: blockdev write zeroes read split partial ...passed 00:09:57.977 Test: blockdev reset ...[2024-05-15 12:31:06.841322] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:57.978 [2024-05-15 12:31:06.846865] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:57.978 passed 00:09:57.978 Test: blockdev write read 8 blocks ...passed 00:09:57.978 Test: blockdev write read size > 128k ...passed 00:09:57.978 Test: blockdev write read invalid size ...passed 00:09:57.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.978 Test: blockdev write read max offset ...passed 00:09:57.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.978 Test: blockdev writev readv 8 blocks ...passed 00:09:57.978 Test: blockdev writev readv 30 x 1block ...passed 00:09:57.978 Test: blockdev writev readv block ...passed 00:09:57.978 Test: blockdev writev readv size > 128k ...passed 00:09:57.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:57.978 Test: blockdev comparev and writev ...[2024-05-15 12:31:06.855329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x285601000 len:0x1000 00:09:57.978 [2024-05-15 12:31:06.855395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:57.978 passed 00:09:57.978 Test: blockdev nvme passthru rw ...passed 00:09:57.978 Test: blockdev nvme passthru vendor specific ...passed 00:09:57.978 Test: blockdev nvme admin passthru ...[2024-05-15 12:31:06.856304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:57.978 [2024-05-15 12:31:06.856354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:57.978 passed 00:09:57.978 Test: blockdev copy ...passed 00:09:57.978 Suite: bdevio tests on: Nvme1n1 00:09:57.978 Test: blockdev write read block ...passed 00:09:57.978 Test: blockdev write zeroes read block ...passed 00:09:57.978 Test: blockdev write zeroes read no split ...passed 00:09:57.978 Test: blockdev write zeroes read split ...passed 00:09:57.978 Test: blockdev write zeroes read split partial ...passed 00:09:57.978 Test: blockdev reset ...[2024-05-15 12:31:06.926035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:57.978 [2024-05-15 12:31:06.929676] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:57.978 passed 00:09:57.978 Test: blockdev write read 8 blocks ...passed 00:09:57.978 Test: blockdev write read size > 128k ...passed 00:09:57.978 Test: blockdev write read invalid size ...passed 00:09:57.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.978 Test: blockdev write read max offset ...passed 00:09:57.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.978 Test: blockdev writev readv 8 blocks ...passed 00:09:57.978 Test: blockdev writev readv 30 x 1block ...passed 00:09:57.978 Test: blockdev writev readv block ...passed 00:09:57.978 Test: blockdev writev readv size > 128k ...passed 00:09:57.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:57.978 Test: blockdev comparev and writev ...[2024-05-15 12:31:06.937865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295206000 len:0x1000 00:09:57.978 [2024-05-15 12:31:06.937926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:57.978 passed 00:09:57.978 Test: blockdev nvme passthru rw ...passed 00:09:57.978 Test: blockdev nvme passthru vendor specific ...[2024-05-15 12:31:06.938781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:57.978 passed 00:09:57.978 Test: blockdev nvme admin passthru ...[2024-05-15 12:31:06.938822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:57.978 passed 00:09:57.978 Test: blockdev copy ...passed 00:09:57.978 Suite: bdevio tests on: Nvme0n1 00:09:57.978 Test: blockdev write read block ...passed 00:09:57.978 Test: blockdev write zeroes read block ...passed 00:09:57.978 Test: blockdev write zeroes read no split ...passed 00:09:57.978 Test: blockdev write zeroes read split ...passed 00:09:58.236 Test: blockdev write zeroes read split partial ...passed 00:09:58.236 Test: blockdev reset ...[2024-05-15 12:31:07.008235] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:58.236 [2024-05-15 12:31:07.012036] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:58.236 passed 00:09:58.236 Test: blockdev write read 8 blocks ...passed 00:09:58.236 Test: blockdev write read size > 128k ...passed 00:09:58.236 Test: blockdev write read invalid size ...passed 00:09:58.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.236 Test: blockdev write read max offset ...passed 00:09:58.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.236 Test: blockdev writev readv 8 blocks ...passed 00:09:58.236 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.236 Test: blockdev writev readv block ...passed 00:09:58.236 Test: blockdev writev readv size > 128k ...passed 00:09:58.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.236 Test: blockdev comparev and writev ...passed 00:09:58.236 Test: blockdev nvme passthru rw ...[2024-05-15 12:31:07.019671] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:58.236 separate metadata which is not supported yet. 00:09:58.236 passed 00:09:58.236 Test: blockdev nvme passthru vendor specific ...passed 00:09:58.236 Test: blockdev nvme admin passthru ...[2024-05-15 12:31:07.020170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:58.236 [2024-05-15 12:31:07.020229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:58.236 passed 00:09:58.236 Test: blockdev copy ...passed 00:09:58.236 00:09:58.236 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.236 suites 6 6 n/a 0 0 00:09:58.236 tests 138 138 138 0 0 00:09:58.236 asserts 893 893 893 0 n/a 00:09:58.236 00:09:58.236 Elapsed time = 1.324 seconds 00:09:58.236 0 00:09:58.236 12:31:07 -- bdev/blockdev.sh@293 -- # killprocess 61832 00:09:58.236 12:31:07 -- common/autotest_common.sh@926 -- # '[' -z 61832 ']' 00:09:58.236 12:31:07 -- common/autotest_common.sh@930 -- # kill -0 61832 00:09:58.236 12:31:07 -- common/autotest_common.sh@931 -- # uname 00:09:58.236 12:31:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:58.236 12:31:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61832 00:09:58.236 12:31:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:58.236 12:31:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:58.236 12:31:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61832' 00:09:58.236 killing process with pid 61832 00:09:58.236 12:31:07 -- common/autotest_common.sh@945 -- # kill 61832 00:09:58.237 12:31:07 -- common/autotest_common.sh@950 -- # wait 61832 00:09:59.173 12:31:08 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:59.173 00:09:59.173 real 0m3.446s 00:09:59.173 user 0m8.879s 00:09:59.173 sys 0m0.443s 00:09:59.173 12:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.173 ************************************ 00:09:59.173 END TEST bdev_bounds 00:09:59.173 ************************************ 00:09:59.173 12:31:08 -- common/autotest_common.sh@10 -- # set +x 00:09:59.173 12:31:08 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:59.173 12:31:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:59.173 12:31:08 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.173 12:31:08 -- common/autotest_common.sh@10 -- # set +x 00:09:59.173 ************************************ 00:09:59.173 START TEST bdev_nbd 00:09:59.173 ************************************ 00:09:59.173 12:31:08 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:59.173 12:31:08 -- bdev/blockdev.sh@298 -- # uname -s 00:09:59.173 12:31:08 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:59.173 12:31:08 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.173 12:31:08 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:59.173 12:31:08 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:59.173 12:31:08 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:59.173 12:31:08 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:09:59.173 12:31:08 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:59.173 12:31:08 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:59.173 12:31:08 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:59.173 12:31:08 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:09:59.173 12:31:08 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:59.173 12:31:08 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:59.173 12:31:08 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:59.173 12:31:08 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:59.173 12:31:08 -- bdev/blockdev.sh@316 -- # nbd_pid=61899 00:09:59.173 12:31:08 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:59.173 12:31:08 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:59.173 12:31:08 -- bdev/blockdev.sh@318 -- # waitforlisten 61899 /var/tmp/spdk-nbd.sock 00:09:59.173 12:31:08 -- common/autotest_common.sh@819 -- # '[' -z 61899 ']' 00:09:59.173 12:31:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:59.173 12:31:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:59.173 12:31:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:59.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:59.173 12:31:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:59.173 12:31:08 -- common/autotest_common.sh@10 -- # set +x 00:09:59.431 [2024-05-15 12:31:08.255330] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
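For the bdev_nbd stage, the bdevs are served by the bdev_svc stub application on a dedicated RPC socket; every nbd_* RPC that follows is sent there. As launched in this run (a sketch; -i 0 selects shared-memory instance 0):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &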
00:09:59.431 [2024-05-15 12:31:08.255733] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.431 [2024-05-15 12:31:08.428621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.998 [2024-05-15 12:31:08.724267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.934 12:31:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:00.934 12:31:09 -- common/autotest_common.sh@852 -- # return 0 00:10:00.934 12:31:09 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@24 -- # local i 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:00.934 12:31:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:01.501 12:31:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:01.501 12:31:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:01.501 12:31:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:01.501 12:31:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:01.501 12:31:10 -- common/autotest_common.sh@857 -- # local i 00:10:01.501 12:31:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:01.501 12:31:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:01.501 12:31:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:01.501 12:31:10 -- common/autotest_common.sh@861 -- # break 00:10:01.501 12:31:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:01.501 12:31:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:01.501 12:31:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:01.501 1+0 records in 00:10:01.501 1+0 records out 00:10:01.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063939 s, 6.4 MB/s 00:10:01.501 12:31:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.501 12:31:10 -- common/autotest_common.sh@874 -- # size=4096 00:10:01.501 12:31:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.501 12:31:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:01.501 12:31:10 -- common/autotest_common.sh@877 -- # return 0 00:10:01.501 12:31:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:01.501 12:31:10 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:01.501 12:31:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:01.776 12:31:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:01.776 12:31:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:01.776 12:31:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:01.776 12:31:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:01.776 12:31:10 -- common/autotest_common.sh@857 -- # local i 00:10:01.776 12:31:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:01.776 12:31:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:01.776 12:31:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:01.776 12:31:10 -- common/autotest_common.sh@861 -- # break 00:10:01.776 12:31:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:01.776 12:31:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:01.776 12:31:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:01.776 1+0 records in 00:10:01.776 1+0 records out 00:10:01.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572596 s, 7.2 MB/s 00:10:01.776 12:31:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.776 12:31:10 -- common/autotest_common.sh@874 -- # size=4096 00:10:01.776 12:31:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.776 12:31:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:01.776 12:31:10 -- common/autotest_common.sh@877 -- # return 0 00:10:01.776 12:31:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:01.776 12:31:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:01.776 12:31:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:02.035 12:31:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:02.035 12:31:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:02.035 12:31:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:02.035 12:31:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:10:02.035 12:31:10 -- common/autotest_common.sh@857 -- # local i 00:10:02.035 12:31:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:02.035 12:31:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:02.035 12:31:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:10:02.035 12:31:10 -- common/autotest_common.sh@861 -- # break 00:10:02.035 12:31:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:02.035 12:31:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:02.035 12:31:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.035 1+0 records in 00:10:02.035 1+0 records out 00:10:02.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716179 s, 5.7 MB/s 00:10:02.035 12:31:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.035 12:31:10 -- common/autotest_common.sh@874 -- # size=4096 00:10:02.035 12:31:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.035 12:31:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:02.035 12:31:10 -- common/autotest_common.sh@877 -- # return 0 
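Each start/stop iteration above is one RPC round-trip plus a single direct-I/O read to prove the kernel device is live; sketched for the first bdev, using only commands that appear in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach: the RPC prints the allocated node, e.g. /dev/nbd0
    nbd=$($rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)
    # one 4 KiB O_DIRECT read confirms the device answers I/O
    dd if=$nbd of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    # detach
    $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk $nbd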
00:10:02.035 12:31:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.035 12:31:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:02.035 12:31:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:02.293 12:31:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:02.294 12:31:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:02.294 12:31:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:02.294 12:31:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:10:02.294 12:31:11 -- common/autotest_common.sh@857 -- # local i 00:10:02.294 12:31:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:02.294 12:31:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:02.294 12:31:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:10:02.294 12:31:11 -- common/autotest_common.sh@861 -- # break 00:10:02.294 12:31:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:02.294 12:31:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:02.294 12:31:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.294 1+0 records in 00:10:02.294 1+0 records out 00:10:02.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115399 s, 3.5 MB/s 00:10:02.294 12:31:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.294 12:31:11 -- common/autotest_common.sh@874 -- # size=4096 00:10:02.294 12:31:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.294 12:31:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:02.294 12:31:11 -- common/autotest_common.sh@877 -- # return 0 00:10:02.294 12:31:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.294 12:31:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:02.294 12:31:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:02.860 12:31:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:02.860 12:31:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:02.860 12:31:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:02.860 12:31:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:10:02.860 12:31:11 -- common/autotest_common.sh@857 -- # local i 00:10:02.860 12:31:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:02.860 12:31:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:02.860 12:31:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:10:02.860 12:31:11 -- common/autotest_common.sh@861 -- # break 00:10:02.860 12:31:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:02.860 12:31:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:02.860 12:31:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.860 1+0 records in 00:10:02.860 1+0 records out 00:10:02.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705813 s, 5.8 MB/s 00:10:02.860 12:31:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.860 12:31:11 -- common/autotest_common.sh@874 -- # size=4096 00:10:02.860 12:31:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.860 12:31:11 -- common/autotest_common.sh@876 -- # '[' 4096 
'!=' 0 ']' 00:10:02.860 12:31:11 -- common/autotest_common.sh@877 -- # return 0 00:10:02.860 12:31:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.860 12:31:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:02.860 12:31:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:03.119 12:31:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:03.119 12:31:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:03.119 12:31:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:03.119 12:31:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:10:03.119 12:31:11 -- common/autotest_common.sh@857 -- # local i 00:10:03.119 12:31:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:03.119 12:31:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:03.119 12:31:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:10:03.119 12:31:11 -- common/autotest_common.sh@861 -- # break 00:10:03.119 12:31:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:03.119 12:31:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:03.119 12:31:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:03.119 1+0 records in 00:10:03.119 1+0 records out 00:10:03.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086964 s, 4.7 MB/s 00:10:03.119 12:31:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.119 12:31:11 -- common/autotest_common.sh@874 -- # size=4096 00:10:03.119 12:31:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:03.119 12:31:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:03.119 12:31:11 -- common/autotest_common.sh@877 -- # return 0 00:10:03.119 12:31:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:03.119 12:31:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:03.119 12:31:11 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd0", 00:10:03.377 "bdev_name": "Nvme0n1" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd1", 00:10:03.377 "bdev_name": "Nvme1n1" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd2", 00:10:03.377 "bdev_name": "Nvme2n1" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd3", 00:10:03.377 "bdev_name": "Nvme2n2" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd4", 00:10:03.377 "bdev_name": "Nvme2n3" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd5", 00:10:03.377 "bdev_name": "Nvme3n1" 00:10:03.377 } 00:10:03.377 ]' 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd0", 00:10:03.377 "bdev_name": "Nvme0n1" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd1", 00:10:03.377 "bdev_name": "Nvme1n1" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd2", 00:10:03.377 "bdev_name": "Nvme2n1" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd3", 00:10:03.377 "bdev_name": "Nvme2n2" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": 
"/dev/nbd4", 00:10:03.377 "bdev_name": "Nvme2n3" 00:10:03.377 }, 00:10:03.377 { 00:10:03.377 "nbd_device": "/dev/nbd5", 00:10:03.377 "bdev_name": "Nvme3n1" 00:10:03.377 } 00:10:03.377 ]' 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@51 -- # local i 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.377 12:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@41 -- # break 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.635 12:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@41 -- # break 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.893 12:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@41 -- # break 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.152 12:31:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:04.410 
12:31:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@41 -- # break 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.410 12:31:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:04.667 12:31:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:04.667 12:31:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:04.667 12:31:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:04.667 12:31:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.667 12:31:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.667 12:31:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:04.925 12:31:13 -- bdev/nbd_common.sh@41 -- # break 00:10:04.925 12:31:13 -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.925 12:31:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.925 12:31:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:05.182 12:31:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@41 -- # break 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.182 12:31:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@65 -- # true 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@65 -- # count=0 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@122 -- # count=0 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@127 -- # return 0 00:10:05.481 12:31:14 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@12 -- # local i 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:05.481 12:31:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:05.760 /dev/nbd0 00:10:05.760 12:31:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:05.760 12:31:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:05.760 12:31:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:05.760 12:31:14 -- common/autotest_common.sh@857 -- # local i 00:10:05.760 12:31:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:05.760 12:31:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:05.760 12:31:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:05.760 12:31:14 -- common/autotest_common.sh@861 -- # break 00:10:05.760 12:31:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:05.760 12:31:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:05.760 12:31:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.760 1+0 records in 00:10:05.760 1+0 records out 00:10:05.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558306 s, 7.3 MB/s 00:10:05.760 12:31:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.760 12:31:14 -- common/autotest_common.sh@874 -- # size=4096 00:10:05.760 12:31:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.760 12:31:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:05.760 12:31:14 -- common/autotest_common.sh@877 -- # return 0 00:10:05.760 12:31:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:05.760 12:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:05.760 12:31:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:06.017 /dev/nbd1 00:10:06.017 12:31:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:06.017 12:31:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:06.017 12:31:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:06.017 12:31:14 -- common/autotest_common.sh@857 -- # local i 00:10:06.017 12:31:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:06.017 12:31:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:06.017 12:31:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:06.017 12:31:14 -- common/autotest_common.sh@861 -- # break 
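With the six bdevs about to be exported at fixed nodes (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd13), the nbd_dd_data_verify pass further down amounts to a write-then-compare cycle over 1 MiB of random data per device. A sketch of its shape (the real helper runs all writes first, then all compares):

    randfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of=$randfile bs=4096 count=256        # 1 MiB of test data
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=$randfile of=$nbd bs=4096 count=256 oflag=direct   # write pass
        cmp -b -n 1M $randfile $nbd                              # verify pass
    done
    rm $randfile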
00:10:06.017 12:31:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:06.017 12:31:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:06.017 12:31:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:06.017 1+0 records in 00:10:06.017 1+0 records out 00:10:06.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000863863 s, 4.7 MB/s 00:10:06.017 12:31:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.017 12:31:14 -- common/autotest_common.sh@874 -- # size=4096 00:10:06.017 12:31:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.017 12:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:06.017 12:31:15 -- common/autotest_common.sh@877 -- # return 0 00:10:06.017 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:06.017 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:06.017 12:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:06.275 /dev/nbd10 00:10:06.275 12:31:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:06.275 12:31:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:06.275 12:31:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:10:06.275 12:31:15 -- common/autotest_common.sh@857 -- # local i 00:10:06.275 12:31:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:06.275 12:31:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:06.275 12:31:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:10:06.275 12:31:15 -- common/autotest_common.sh@861 -- # break 00:10:06.275 12:31:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:06.275 12:31:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:06.275 12:31:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:06.275 1+0 records in 00:10:06.275 1+0 records out 00:10:06.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573866 s, 7.1 MB/s 00:10:06.275 12:31:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.275 12:31:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:06.275 12:31:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.275 12:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:06.275 12:31:15 -- common/autotest_common.sh@877 -- # return 0 00:10:06.275 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:06.275 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:06.275 12:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:06.840 /dev/nbd11 00:10:06.840 12:31:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:06.840 12:31:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:06.840 12:31:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:10:06.840 12:31:15 -- common/autotest_common.sh@857 -- # local i 00:10:06.840 12:31:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:06.840 12:31:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:06.840 12:31:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:10:06.840 12:31:15 -- 
common/autotest_common.sh@861 -- # break 00:10:06.840 12:31:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:06.840 12:31:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:06.840 12:31:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:06.840 1+0 records in 00:10:06.840 1+0 records out 00:10:06.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654159 s, 6.3 MB/s 00:10:06.840 12:31:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.840 12:31:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:06.840 12:31:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.840 12:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:06.840 12:31:15 -- common/autotest_common.sh@877 -- # return 0 00:10:06.840 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:06.840 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:06.840 12:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:07.098 /dev/nbd12 00:10:07.098 12:31:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:07.098 12:31:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:07.098 12:31:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:10:07.098 12:31:15 -- common/autotest_common.sh@857 -- # local i 00:10:07.098 12:31:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:07.098 12:31:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:07.098 12:31:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:10:07.098 12:31:15 -- common/autotest_common.sh@861 -- # break 00:10:07.098 12:31:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:07.098 12:31:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:07.098 12:31:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:07.098 1+0 records in 00:10:07.098 1+0 records out 00:10:07.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000891805 s, 4.6 MB/s 00:10:07.098 12:31:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.098 12:31:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:07.098 12:31:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.098 12:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:07.098 12:31:15 -- common/autotest_common.sh@877 -- # return 0 00:10:07.098 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.098 12:31:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:07.098 12:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:07.356 /dev/nbd13 00:10:07.356 12:31:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:07.356 12:31:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:07.356 12:31:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:10:07.356 12:31:16 -- common/autotest_common.sh@857 -- # local i 00:10:07.356 12:31:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:07.356 12:31:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:07.356 12:31:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
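Before any data flows, the harness asserts that exactly six NBD nodes are attached by listing them over RPC, as the count=6 check just below does; in isolation:

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -ne 6 ] && exit 1   # six bdevs -> six /dev/nbd* nodes expected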
00:10:07.356 12:31:16 -- common/autotest_common.sh@861 -- # break 00:10:07.356 12:31:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:07.356 12:31:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:07.356 12:31:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:07.356 1+0 records in 00:10:07.356 1+0 records out 00:10:07.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517392 s, 7.9 MB/s 00:10:07.356 12:31:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.356 12:31:16 -- common/autotest_common.sh@874 -- # size=4096 00:10:07.356 12:31:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.356 12:31:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:07.356 12:31:16 -- common/autotest_common.sh@877 -- # return 0 00:10:07.356 12:31:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.356 12:31:16 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:07.356 12:31:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:07.356 12:31:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.356 12:31:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd0", 00:10:07.615 "bdev_name": "Nvme0n1" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd1", 00:10:07.615 "bdev_name": "Nvme1n1" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd10", 00:10:07.615 "bdev_name": "Nvme2n1" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd11", 00:10:07.615 "bdev_name": "Nvme2n2" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd12", 00:10:07.615 "bdev_name": "Nvme2n3" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd13", 00:10:07.615 "bdev_name": "Nvme3n1" 00:10:07.615 } 00:10:07.615 ]' 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd0", 00:10:07.615 "bdev_name": "Nvme0n1" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd1", 00:10:07.615 "bdev_name": "Nvme1n1" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd10", 00:10:07.615 "bdev_name": "Nvme2n1" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd11", 00:10:07.615 "bdev_name": "Nvme2n2" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd12", 00:10:07.615 "bdev_name": "Nvme2n3" 00:10:07.615 }, 00:10:07.615 { 00:10:07.615 "nbd_device": "/dev/nbd13", 00:10:07.615 "bdev_name": "Nvme3n1" 00:10:07.615 } 00:10:07.615 ]' 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:07.615 /dev/nbd1 00:10:07.615 /dev/nbd10 00:10:07.615 /dev/nbd11 00:10:07.615 /dev/nbd12 00:10:07.615 /dev/nbd13' 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:07.615 /dev/nbd1 00:10:07.615 /dev/nbd10 00:10:07.615 /dev/nbd11 00:10:07.615 /dev/nbd12 00:10:07.615 /dev/nbd13' 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@65 -- # count=6 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@66 -- # echo 6 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@95 -- # 
count=6 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:07.615 256+0 records in 00:10:07.615 256+0 records out 00:10:07.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00909593 s, 115 MB/s 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:07.615 12:31:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:07.874 256+0 records in 00:10:07.874 256+0 records out 00:10:07.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137542 s, 7.6 MB/s 00:10:07.874 12:31:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:07.874 12:31:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:07.874 256+0 records in 00:10:07.874 256+0 records out 00:10:07.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149226 s, 7.0 MB/s 00:10:07.874 12:31:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:07.874 12:31:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:08.133 256+0 records in 00:10:08.133 256+0 records out 00:10:08.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147368 s, 7.1 MB/s 00:10:08.133 12:31:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:08.133 12:31:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:08.390 256+0 records in 00:10:08.390 256+0 records out 00:10:08.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157292 s, 6.7 MB/s 00:10:08.390 12:31:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:08.390 12:31:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:08.390 256+0 records in 00:10:08.390 256+0 records out 00:10:08.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135502 s, 7.7 MB/s 00:10:08.390 12:31:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:08.390 12:31:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:08.648 256+0 records in 00:10:08.648 256+0 records out 00:10:08.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162422 s, 6.5 MB/s 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:08.648 12:31:17 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@51 -- # local i 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.648 12:31:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@41 -- # break 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.906 12:31:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:09.164 12:31:18 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@41 -- # break 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@45 -- # return 0 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:09.164 12:31:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@41 -- # break 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@45 -- # return 0 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:09.729 12:31:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@41 -- # break 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@45 -- # return 0 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:09.987 12:31:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@41 -- # break 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@45 -- # return 0 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.245 12:31:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@41 -- # break 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@45 -- # return 0 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.503 12:31:19 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@65 -- # true 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@65 -- # count=0 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@104 -- # count=0 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@109 -- # return 0 00:10:10.762 12:31:19 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:10.762 12:31:19 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:11.021 malloc_lvol_verify 00:10:11.021 12:31:19 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:11.280 2a8b5e4e-30ed-473b-b637-6be4d8f84114 00:10:11.280 12:31:20 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:11.537 a45afa99-3010-47ce-9217-a6f10082d5eb 00:10:11.537 12:31:20 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:11.795 /dev/nbd0 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:11.796 mke2fs 1.46.5 (30-Dec-2021) 00:10:11.796 Discarding device blocks: 0/4096 done 00:10:11.796 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:11.796 00:10:11.796 Allocating group tables: 0/1 done 00:10:11.796 Writing inode tables: 0/1 done 00:10:11.796 Creating journal (1024 blocks): done 00:10:11.796 Writing superblocks and filesystem accounting information: 0/1 done 00:10:11.796 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@51 -- # local i 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.796 12:31:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@41 -- # break 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:12.054 12:31:21 -- bdev/nbd_common.sh@147 -- # return 0 00:10:12.054 12:31:21 -- bdev/blockdev.sh@324 -- # killprocess 61899 00:10:12.054 12:31:21 -- common/autotest_common.sh@926 -- # '[' -z 61899 ']' 00:10:12.054 12:31:21 -- common/autotest_common.sh@930 -- # kill -0 61899 00:10:12.054 12:31:21 -- common/autotest_common.sh@931 -- # uname 00:10:12.054 12:31:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:12.054 12:31:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61899 00:10:12.054 12:31:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:12.054 12:31:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:12.054 killing process with pid 61899 00:10:12.054 12:31:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61899' 00:10:12.054 12:31:21 -- common/autotest_common.sh@945 -- # kill 61899 00:10:12.054 12:31:21 -- common/autotest_common.sh@950 -- # wait 61899 00:10:13.429 12:31:22 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:10:13.429 00:10:13.429 real 0m14.018s 00:10:13.429 user 0m19.914s 00:10:13.429 sys 0m4.320s 00:10:13.429 12:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.429 ************************************ 00:10:13.429 END TEST bdev_nbd 00:10:13.429 12:31:22 -- common/autotest_common.sh@10 -- # set +x 00:10:13.429 ************************************ 00:10:13.429 12:31:22 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:13.429 skipping fio tests on NVMe due to multi-ns failures. 00:10:13.429 12:31:22 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:10:13.429 12:31:22 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:13.429 12:31:22 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:13.429 12:31:22 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:13.429 12:31:22 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:13.429 12:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.429 12:31:22 -- common/autotest_common.sh@10 -- # set +x 00:10:13.429 ************************************ 00:10:13.429 START TEST bdev_verify 00:10:13.429 ************************************ 00:10:13.430 12:31:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:13.430 [2024-05-15 12:31:22.332711] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
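The bdev_verify stage launched just above exercises every attached namespace through SPDK's bdevperf example. A minimal sketch of that invocation as a standalone command (paths and values copied from this run; the per-flag notes are annotations rather than log output, and the -C note in particular is an assumption about bdevperf's option set):

# Sketch of the bdev_verify invocation; flag comments are best-effort annotations.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev configuration to load
  -q 128       # queue depth per job
  -o 4096      # I/O size in bytes (the big-io pass later uses 65536)
  -w verify    # write a pattern, read it back, compare the contents
  -t 5         # run time per job, in seconds
  -C           # assumption: let every core submit I/O to every bdev
  -m 0x3       # core mask: reactors on cores 0 and 1
)
"$BDEVPERF" "${args[@]}"

The paired Core Mask 0x1/0x2 rows in the table that follows are the visible effect of -m 0x3: each namespace gets one verify job per reactor core.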
00:10:13.430 [2024-05-15 12:31:22.332905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:10:13.687 [2024-05-15 12:31:22.510291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.945 [2024-05-15 12:31:22.799554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.945 [2024-05-15 12:31:22.799570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.511 Running I/O for 5 seconds... 00:10:19.782 00:10:19.782 Latency(us) 00:10:19.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.782 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x0 length 0xbd0bd 00:10:19.782 Nvme0n1 : 5.04 2680.60 10.47 0.00 0.00 47589.44 7626.01 62914.56 00:10:19.782 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:19.782 Nvme0n1 : 5.05 2701.07 10.55 0.00 0.00 47236.06 5898.24 57671.68 00:10:19.782 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x0 length 0xa0000 00:10:19.782 Nvme1n1 : 5.05 2685.11 10.49 0.00 0.00 47497.73 5242.88 59339.87 00:10:19.782 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0xa0000 length 0xa0000 00:10:19.782 Nvme1n1 : 5.06 2699.76 10.55 0.00 0.00 47206.19 6851.49 54335.30 00:10:19.782 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x0 length 0x80000 00:10:19.782 Nvme2n1 : 5.05 2684.13 10.48 0.00 0.00 47390.54 5898.24 49807.36 00:10:19.782 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x80000 length 0x80000 00:10:19.782 Nvme2n1 : 5.06 2698.49 10.54 0.00 0.00 47077.74 7983.48 48615.80 00:10:19.782 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x0 length 0x80000 00:10:19.782 Nvme2n2 : 5.06 2683.00 10.48 0.00 0.00 47363.15 6732.33 50760.61 00:10:19.782 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x80000 length 0x80000 00:10:19.782 Nvme2n2 : 5.06 2705.41 10.57 0.00 0.00 46961.12 2159.71 48854.11 00:10:19.782 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x0 length 0x80000 00:10:19.782 Nvme2n3 : 5.06 2681.68 10.48 0.00 0.00 47325.31 8043.05 51952.17 00:10:19.782 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x80000 length 0x80000 00:10:19.782 Nvme2n3 : 5.07 2704.61 10.56 0.00 0.00 46928.09 2681.02 49569.05 00:10:19.782 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x0 length 0x20000 00:10:19.782 Nvme3n1 : 5.06 2688.13 10.50 0.00 0.00 47200.43 1139.43 52428.80 00:10:19.782 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.782 Verification LBA range: start 0x20000 length 0x20000 00:10:19.782 Nvme3n1 : 5.07 2703.83 10.56 0.00 0.00 46899.05 3306.59 48854.11 00:10:19.782 
=================================================================================================================== 00:10:19.782 Total : 32315.81 126.23 0.00 0.00 47221.99 1139.43 62914.56 00:10:31.990 00:10:31.990 real 0m17.370s 00:10:31.990 user 0m32.910s 00:10:31.990 sys 0m0.479s 00:10:31.990 12:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.990 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:10:31.990 ************************************ 00:10:31.990 END TEST bdev_verify 00:10:31.990 ************************************ 00:10:31.990 12:31:39 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:31.990 12:31:39 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:31.990 12:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:31.990 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:10:31.990 ************************************ 00:10:31.990 START TEST bdev_verify_big_io 00:10:31.990 ************************************ 00:10:31.990 12:31:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:31.990 [2024-05-15 12:31:39.732583] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:31.990 [2024-05-15 12:31:39.732793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62484 ] 00:10:31.990 [2024-05-15 12:31:39.906683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:31.990 [2024-05-15 12:31:40.196096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.990 [2024-05-15 12:31:40.196099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.248 Running I/O for 5 seconds... 
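Before the big-io table below, a quick consistency check on how bdevperf reports throughput: the MiB/s column is just IOPS multiplied by the I/O size. Using the Nvme0n1 rows from the 4 KiB verify table above and the 64 KiB run that follows (values copied from this log):

# MiB/s = IOPS * io_size / 2^20
echo '2680.60 * 4096  / 1048576' | bc -l   # -> 10.47, as reported above
echo '286.31  * 65536 / 1048576' | bc -l   # -> 17.89, as reported below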
00:10:37.518 00:10:37.518 Latency(us) 00:10:37.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.518 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x0 length 0xbd0b 00:10:37.518 Nvme0n1 : 5.32 286.31 17.89 0.00 0.00 438760.86 47900.86 602454.57 00:10:37.518 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:37.518 Nvme0n1 : 5.39 282.75 17.67 0.00 0.00 424485.09 6434.44 421336.90 00:10:37.518 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x0 length 0xa000 00:10:37.518 Nvme1n1 : 5.36 292.31 18.27 0.00 0.00 426067.19 34793.66 552885.53 00:10:37.518 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0xa000 length 0xa000 00:10:37.518 Nvme1n1 : 5.39 282.64 17.67 0.00 0.00 418590.84 6970.65 409897.89 00:10:37.518 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x0 length 0x8000 00:10:37.518 Nvme2n1 : 5.36 292.19 18.26 0.00 0.00 420851.99 35508.60 503316.48 00:10:37.518 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x8000 length 0x8000 00:10:37.518 Nvme2n1 : 5.33 261.22 16.33 0.00 0.00 478793.40 65297.69 632958.60 00:10:37.518 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x0 length 0x8000 00:10:37.518 Nvme2n2 : 5.36 292.08 18.26 0.00 0.00 415714.04 35746.91 457560.44 00:10:37.518 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x8000 length 0x8000 00:10:37.518 Nvme2n2 : 5.37 266.23 16.64 0.00 0.00 465313.60 43134.60 583389.56 00:10:37.518 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x0 length 0x8000 00:10:37.518 Nvme2n3 : 5.37 299.79 18.74 0.00 0.00 401845.59 2740.60 495690.47 00:10:37.518 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x8000 length 0x8000 00:10:37.518 Nvme2n3 : 5.37 266.13 16.63 0.00 0.00 459126.63 43134.60 526194.50 00:10:37.518 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x0 length 0x2000 00:10:37.518 Nvme3n1 : 5.38 308.53 19.28 0.00 0.00 386063.69 3798.11 499503.48 00:10:37.518 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:37.518 Verification LBA range: start 0x2000 length 0x2000 00:10:37.518 Nvme3n1 : 5.38 274.69 17.17 0.00 0.00 441940.38 4289.63 470905.95 00:10:37.518 =================================================================================================================== 00:10:37.518 Total : 3404.86 212.80 0.00 0.00 430225.80 2740.60 632958.60 00:10:39.419 00:10:39.419 real 0m8.626s 00:10:39.419 user 0m15.704s 00:10:39.419 sys 0m0.381s 00:10:39.419 12:31:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.419 12:31:48 -- common/autotest_common.sh@10 -- # set +x 00:10:39.419 ************************************ 00:10:39.419 END TEST bdev_verify_big_io 00:10:39.419 ************************************ 00:10:39.419 12:31:48 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:39.419 12:31:48 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:39.419 12:31:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.419 12:31:48 -- common/autotest_common.sh@10 -- # set +x 00:10:39.419 ************************************ 00:10:39.419 START TEST bdev_write_zeroes 00:10:39.419 ************************************ 00:10:39.419 12:31:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:39.419 [2024-05-15 12:31:48.416775] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:39.419 [2024-05-15 12:31:48.416968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62594 ] 00:10:39.678 [2024-05-15 12:31:48.594999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.949 [2024-05-15 12:31:48.827808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.535 Running I/O for 1 seconds... 00:10:41.908 00:10:41.908 Latency(us) 00:10:41.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.908 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:41.908 Nvme0n1 : 1.02 9894.94 38.65 0.00 0.00 12883.14 10664.49 23950.43 00:10:41.908 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:41.908 Nvme1n1 : 1.02 9878.85 38.59 0.00 0.00 12880.37 11021.96 23831.27 00:10:41.908 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:41.908 Nvme2n1 : 1.02 9898.31 38.67 0.00 0.00 12841.93 10307.03 20614.05 00:10:41.908 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:41.908 Nvme2n2 : 1.02 9882.35 38.60 0.00 0.00 12803.43 10783.65 19541.64 00:10:41.908 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:41.908 Nvme2n3 : 1.02 9866.83 38.54 0.00 0.00 12802.67 10426.18 19422.49 00:10:41.908 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:41.908 Nvme3n1 : 1.03 9891.38 38.64 0.00 0.00 12720.45 6702.55 19303.33 00:10:41.908 =================================================================================================================== 00:10:41.908 Total : 59312.65 231.69 0.00 0.00 12821.76 6702.55 23950.43 00:10:42.841 00:10:42.841 real 0m3.410s 00:10:42.841 user 0m2.982s 00:10:42.841 sys 0m0.298s 00:10:42.841 12:31:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.841 12:31:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.841 ************************************ 00:10:42.841 END TEST bdev_write_zeroes 00:10:42.841 ************************************ 00:10:42.841 12:31:51 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:42.841 12:31:51 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:42.841 12:31:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.841 12:31:51 -- common/autotest_common.sh@10 
-- # set +x 00:10:42.841 ************************************ 00:10:42.841 START TEST bdev_json_nonenclosed 00:10:42.841 ************************************ 00:10:42.841 12:31:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:43.100 [2024-05-15 12:31:51.877962] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:43.100 [2024-05-15 12:31:51.878172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62654 ] 00:10:43.100 [2024-05-15 12:31:52.055017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.358 [2024-05-15 12:31:52.340703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.358 [2024-05-15 12:31:52.340964] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:43.358 [2024-05-15 12:31:52.340993] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:43.924 ************************************ 00:10:43.924 END TEST bdev_json_nonenclosed 00:10:43.924 ************************************ 00:10:43.924 00:10:43.924 real 0m0.997s 00:10:43.924 user 0m0.732s 00:10:43.924 sys 0m0.157s 00:10:43.924 12:31:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.924 12:31:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.924 12:31:52 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:43.924 12:31:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:43.924 12:31:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:43.924 12:31:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.924 ************************************ 00:10:43.924 START TEST bdev_json_nonarray 00:10:43.924 ************************************ 00:10:43.924 12:31:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:43.924 [2024-05-15 12:31:52.928379] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:43.924 [2024-05-15 12:31:52.928602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62685 ] 00:10:44.182 [2024-05-15 12:31:53.103360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.440 [2024-05-15 12:31:53.334682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.440 [2024-05-15 12:31:53.334954] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
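bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if spdk_app_start rejects it with the errors seen above. The log never prints the files themselves, so the following shapes are hypothetical illustrations matched to the two error strings:

# Hypothetical config shapes (file contents assumed; only the errors are from the log).
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# -> Invalid JSON configuration: not enclosed in {}.

cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev" } }
EOF
# -> Invalid JSON configuration: 'subsystems' should be an array.

# For contrast, the minimal well-formed skeleton:
cat > valid.json <<'EOF'
{ "subsystems": [] }
EOF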
00:10:44.440 [2024-05-15 12:31:53.334983] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:45.006 00:10:45.006 real 0m0.948s 00:10:45.006 user 0m0.683s 00:10:45.006 sys 0m0.157s 00:10:45.006 ************************************ 00:10:45.006 END TEST bdev_json_nonarray 00:10:45.006 ************************************ 00:10:45.006 12:31:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.006 12:31:53 -- common/autotest_common.sh@10 -- # set +x 00:10:45.006 12:31:53 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:10:45.006 12:31:53 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:10:45.006 12:31:53 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:10:45.006 12:31:53 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:45.006 12:31:53 -- bdev/blockdev.sh@809 -- # cleanup 00:10:45.006 12:31:53 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:45.006 12:31:53 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:45.006 12:31:53 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:10:45.006 12:31:53 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:10:45.006 12:31:53 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:10:45.006 12:31:53 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:10:45.006 00:10:45.006 real 0m55.897s 00:10:45.006 user 1m28.340s 00:10:45.006 sys 0m7.499s 00:10:45.006 ************************************ 00:10:45.006 END TEST blockdev_nvme 00:10:45.006 ************************************ 00:10:45.006 12:31:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.006 12:31:53 -- common/autotest_common.sh@10 -- # set +x 00:10:45.006 12:31:53 -- spdk/autotest.sh@219 -- # uname -s 00:10:45.006 12:31:53 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:10:45.006 12:31:53 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:45.006 12:31:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:45.006 12:31:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.006 12:31:53 -- common/autotest_common.sh@10 -- # set +x 00:10:45.006 ************************************ 00:10:45.006 START TEST blockdev_nvme_gpt 00:10:45.006 ************************************ 00:10:45.006 12:31:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:45.006 * Looking for test storage... 
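The teardown above comes from blockdev.sh's cleanup trap: the scratch files are tied to the process lifetime so an aborted run does not leak them. A simplified sketch of the pattern as it appears in the trace (the function body is reduced to the two rm calls shown; the real script's cleanup handles more cases):

# Sketch of the cleanup-trap pattern from blockdev.sh.
cleanup() {
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
}
trap cleanup SIGINT SIGTERM EXIT   # armed near the start of the run
# ... tests run ...
trap - SIGINT SIGTERM EXIT         # disarmed on the normal path, then called explicitly
cleanup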
00:10:45.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:45.006 12:31:53 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:45.006 12:31:53 -- bdev/nbd_common.sh@6 -- # set -e 00:10:45.006 12:31:53 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:45.006 12:31:53 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:45.006 12:31:53 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:45.006 12:31:53 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:45.006 12:31:53 -- bdev/blockdev.sh@18 -- # : 00:10:45.006 12:31:53 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:45.006 12:31:53 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:45.006 12:31:53 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:45.006 12:31:53 -- bdev/blockdev.sh@672 -- # uname -s 00:10:45.006 12:31:53 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:10:45.006 12:31:53 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:10:45.007 12:31:53 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:10:45.007 12:31:53 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:45.007 12:31:53 -- bdev/blockdev.sh@682 -- # dek= 00:10:45.007 12:31:53 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:45.007 12:31:53 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:45.007 12:31:53 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:45.007 12:31:53 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:10:45.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.007 12:31:53 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:10:45.007 12:31:53 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:45.007 12:31:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62760 00:10:45.007 12:31:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:45.007 12:31:53 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:45.007 12:31:53 -- bdev/blockdev.sh@47 -- # waitforlisten 62760 00:10:45.007 12:31:53 -- common/autotest_common.sh@819 -- # '[' -z 62760 ']' 00:10:45.007 12:31:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.007 12:31:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:45.007 12:31:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.007 12:31:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:45.007 12:31:53 -- common/autotest_common.sh@10 -- # set +x 00:10:45.265 [2024-05-15 12:31:54.107705] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
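start_spdk_tgt above backgrounds the target binary and then parks in waitforlisten until pid 62760 is serving RPCs on /var/tmp/spdk.sock. A minimal stand-in for that wait loop, assuming the stock rpc.py client (the real helper in autotest_common.sh adds liveness checks on the pid and better diagnostics):

# Minimal start-and-wait sketch; a simplified stand-in for waitforlisten.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
for _ in $(seq 1 100); do
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; then
    break                # the socket answered; the target is ready
  fi
  sleep 0.1
done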
00:10:45.265 [2024-05-15 12:31:54.107867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62760 ] 00:10:45.523 [2024-05-15 12:31:54.283487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.523 [2024-05-15 12:31:54.528164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:45.523 [2024-05-15 12:31:54.528405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.898 12:31:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:46.898 12:31:55 -- common/autotest_common.sh@852 -- # return 0 00:10:46.898 12:31:55 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:46.898 12:31:55 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:10:46.898 12:31:55 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:47.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.414 Waiting for block devices as requested 00:10:47.414 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.414 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.672 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.672 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.940 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:52.940 12:32:01 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:10:52.941 12:32:01 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:10:52.941 12:32:01 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:10:52.941 12:32:01 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:10:52.941 12:32:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:52.941 12:32:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:52.941 12:32:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:52.941 12:32:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:52.941 12:32:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:10:52.941 12:32:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:10:52.941 12:32:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:52.941 12:32:01 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:52.941 12:32:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:10:52.941 12:32:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:10:52.941 12:32:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:52.941 12:32:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:52.941 12:32:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:10:52.941 12:32:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:52.941 12:32:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:52.941 12:32:01 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:10:52.941 12:32:01 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:10:52.941 12:32:01 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:10:52.941 12:32:01 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:52.941 12:32:01 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:10:52.941 12:32:01 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:10:52.941 12:32:01 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:10:52.941 12:32:01 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:10:52.941 BYT; 00:10:52.941 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:52.941 12:32:01 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:10:52.941 BYT; 00:10:52.941 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:52.941 12:32:01 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:10:52.941 12:32:01 -- bdev/blockdev.sh@114 -- # break 00:10:52.941 12:32:01 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:10:52.941 12:32:01 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:52.941 12:32:01 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:52.941 12:32:01 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:52.941 12:32:01 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:10:52.941 12:32:01 -- scripts/common.sh@410 -- # local spdk_guid 00:10:52.941 12:32:01 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:52.941 12:32:01 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:52.941 12:32:01 -- scripts/common.sh@415 -- # IFS='()' 00:10:52.941 12:32:01 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:10:52.941 12:32:01 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:52.941 12:32:01 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:52.941 12:32:01 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:52.941 12:32:01 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:52.941 12:32:01 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:52.941 12:32:01 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:10:52.941 12:32:01 -- scripts/common.sh@422 -- # local spdk_guid 00:10:52.941 12:32:01 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:52.941 12:32:01 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:52.941 12:32:01 -- scripts/common.sh@427 -- # IFS='()' 00:10:52.941 12:32:01 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:10:52.941 12:32:01 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:52.941 12:32:01 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:52.941 12:32:01 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:52.941 12:32:01 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:52.941 12:32:01 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:52.941 12:32:01 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:10:53.878 The operation has completed successfully. 00:10:53.878 12:32:02 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:10:54.830 The operation has completed successfully. 
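Steps @126 through @131 of blockdev.sh above carve the scratch namespace into the two SPDK test partitions. A condensed recap as a standalone script, with every device path and GUID copied from this log (the comment on why two different type GUIDs are applied is my reading of the test, not something the log states):

# GPT prep recap; values are exactly as logged.
dev=/dev/nvme2n1
parted -s "$dev" mklabel gpt \
  mkpart SPDK_TEST_first 0% 50% \
  mkpart SPDK_TEST_second 50% 100%
# Partition 1 gets the current SPDK_GPT_GUID, partition 2 the legacy
# SPDK_GPT_OLD_GUID, presumably so both detection paths in module/bdev/gpt run:
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"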
00:10:54.830 12:32:03 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:55.765 lsblk: /dev/nvme0c0n1: not a block device 00:10:56.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:56.023 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.023 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.023 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.281 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.281 12:32:05 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:10:56.281 12:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.281 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.281 [] 00:10:56.281 12:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.281 12:32:05 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:10:56.281 12:32:05 -- bdev/blockdev.sh@79 -- # local json 00:10:56.281 12:32:05 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:10:56.281 12:32:05 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:56.281 12:32:05 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:10:56.281 12:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.281 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.539 12:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.539 12:32:05 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:56.539 12:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.539 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.539 12:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.539 12:32:05 -- bdev/blockdev.sh@738 -- # cat 00:10:56.539 12:32:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:56.539 12:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.539 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.539 12:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.539 12:32:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:56.539 12:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.539 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.797 12:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.797 12:32:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:56.797 12:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.797 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.797 12:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.797 12:32:05 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:56.797 12:32:05 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:56.797 12:32:05 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:56.797 12:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.797 12:32:05 -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.797 12:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.797 12:32:05 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:56.797 12:32:05 -- bdev/blockdev.sh@747 -- # jq -r .name 00:10:56.798 12:32:05 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b59586f9-fc9f-4307-ba9e-5905602ea485"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b59586f9-fc9f-4307-ba9e-5905602ea485",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' 
' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5248d172-7d8a-4077-9a5f-7d596b497410"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5248d172-7d8a-4077-9a5f-7d596b497410",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e57eabca-c820-43ec-bd95-2ee9ba5b7cb0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e57eabca-c820-43ec-bd95-2ee9ba5b7cb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ebf2b1df-e7d8-4fb3-a0de-ac4ebbe04926"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ebf2b1df-e7d8-4fb3-a0de-ac4ebbe04926",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe 
Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e3395a59-4ec7-4fd3-802b-220944d4c281"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e3395a59-4ec7-4fd3-802b-220944d4c281",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:56.798 12:32:05 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:56.798 12:32:05 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:10:56.798 12:32:05 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:56.798 12:32:05 -- bdev/blockdev.sh@752 -- # killprocess 62760 00:10:56.798 12:32:05 -- common/autotest_common.sh@926 -- # '[' -z 62760 ']' 00:10:56.798 12:32:05 -- common/autotest_common.sh@930 -- # kill -0 62760 00:10:56.798 12:32:05 -- common/autotest_common.sh@931 -- # uname 00:10:56.798 12:32:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:56.798 12:32:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62760 00:10:56.798 killing process with pid 62760 00:10:56.798 12:32:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:56.798 12:32:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:56.798 12:32:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62760' 00:10:56.798 12:32:05 -- common/autotest_common.sh@945 -- # kill 62760 00:10:56.798 12:32:05 -- common/autotest_common.sh@950 -- # wait 62760 00:10:59.323 12:32:07 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:59.323 12:32:07 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:59.323 12:32:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:59.323 12:32:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.323 12:32:07 -- common/autotest_common.sh@10 -- # set +x 00:10:59.323 ************************************ 00:10:59.323 START TEST bdev_hello_world 00:10:59.323 ************************************ 00:10:59.323 12:32:07 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:59.323 [2024-05-15 12:32:08.021928] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:59.323 [2024-05-15 12:32:08.022294] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:10:59.323 [2024-05-15 12:32:08.186952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.579 [2024-05-15 12:32:08.419746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.143 [2024-05-15 12:32:09.048796] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:00.143 [2024-05-15 12:32:09.048860] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:11:00.143 [2024-05-15 12:32:09.048904] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:00.143 [2024-05-15 12:32:09.051996] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:00.143 [2024-05-15 12:32:09.052580] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:00.143 [2024-05-15 12:32:09.052622] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:00.143 [2024-05-15 12:32:09.052806] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:00.143 00:11:00.143 [2024-05-15 12:32:09.052839] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:01.516 00:11:01.516 real 0m2.248s 00:11:01.516 user 0m1.881s 00:11:01.516 sys 0m0.256s 00:11:01.516 ************************************ 00:11:01.516 END TEST bdev_hello_world 00:11:01.516 ************************************ 00:11:01.516 12:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.516 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:11:01.516 12:32:10 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:01.516 12:32:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:01.516 12:32:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.516 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:11:01.516 ************************************ 00:11:01.516 START TEST bdev_bounds 00:11:01.516 ************************************ 00:11:01.516 12:32:10 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:11:01.516 12:32:10 -- bdev/blockdev.sh@288 -- # bdevio_pid=63499 00:11:01.516 Process bdevio pid: 63499 00:11:01.516 12:32:10 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:01.516 12:32:10 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 63499' 00:11:01.516 12:32:10 -- bdev/blockdev.sh@291 -- # waitforlisten 63499 00:11:01.516 12:32:10 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:01.516 12:32:10 -- common/autotest_common.sh@819 -- # '[' -z 63499 ']' 00:11:01.516 12:32:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.516 12:32:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:01.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
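[Editor's note: a minimal hand-run equivalent of the bdev_hello_world step above. The binary path, config file, and bdev name are taken verbatim from the run_test line in this log; running under sudo and outside the autotest harness are assumptions.]

  # Re-run the hello_bdev example against the same config and bdev as above.
  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/build/examples/hello_bdev" \
      --json "$SPDK/test/bdev/bdev.json" \
      -b Nvme0n1p1

On success it writes "Hello World!" to the bdev, reads it back, and exits, matching the NOTICE sequence traced above.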
00:11:01.516 12:32:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:01.516 12:32:10 -- common/autotest_common.sh@828 -- # xtrace_disable
00:11:01.516 12:32:10 -- common/autotest_common.sh@10 -- # set +x
00:11:01.516 [2024-05-15 12:32:10.318343] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:11:01.516 [2024-05-15 12:32:10.318475] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63499 ]
00:11:01.516 [2024-05-15 12:32:10.483444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:01.852 [2024-05-15 12:32:10.718146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:01.852 [2024-05-15 12:32:10.718276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:01.852 [2024-05-15 12:32:10.718309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:03.230 12:32:12 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:11:03.230 12:32:12 -- common/autotest_common.sh@852 -- # return 0
00:11:03.230 12:32:12 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:11:03.230 I/O targets:
00:11:03.230 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB)
00:11:03.230 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB)
00:11:03.230 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:11:03.230 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:11:03.230 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:11:03.230 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:11:03.230 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:11:03.230
00:11:03.230
00:11:03.230 CUnit - A unit testing framework for C - Version 2.1-3
00:11:03.230 http://cunit.sourceforge.net/
00:11:03.230
00:11:03.230
00:11:03.230 Suite: bdevio tests on: Nvme3n1
00:11:03.230 Test: blockdev write read block ...passed
00:11:03.230 Test: blockdev write zeroes read block ...passed
00:11:03.230 Test: blockdev write zeroes read no split ...passed
00:11:03.230 Test: blockdev write zeroes read split ...passed
00:11:03.230 Test: blockdev write zeroes read split partial ...passed
00:11:03.230 Test: blockdev reset ...[2024-05-15 12:32:12.185316] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller
00:11:03.230 passed
00:11:03.230 Test: blockdev write read 8 blocks ...[2024-05-15 12:32:12.188820] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
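[Editor's note: the bdevio suites in this section are driven in two pieces, both visible in the trace above: a server process and an RPC-driven test driver. A by-hand sketch, assuming the same paths; the backgrounding and fixed sleep stand in for the harness's socket wait loop.]

  # Start the bdevio server, then fire the test suites over RPC.
  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
      --json "$SPDK/test/bdev/bdev.json" &
  sleep 2   # crude stand-in for waiting on /var/tmp/spdk.sock
  "$SPDK/test/bdev/bdevio/tests.py" perform_tests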
00:11:03.230 passed 00:11:03.230 Test: blockdev write read size > 128k ...passed 00:11:03.230 Test: blockdev write read invalid size ...passed 00:11:03.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.230 Test: blockdev write read max offset ...passed 00:11:03.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.230 Test: blockdev writev readv 8 blocks ...passed 00:11:03.230 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.230 Test: blockdev writev readv block ...passed 00:11:03.230 Test: blockdev writev readv size > 128k ...passed 00:11:03.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.230 Test: blockdev comparev and writev ...[2024-05-15 12:32:12.196906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28b60a000 len:0x1000 00:11:03.231 [2024-05-15 12:32:12.196966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.231 passed 00:11:03.231 Test: blockdev nvme passthru rw ...passed 00:11:03.231 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.231 Test: blockdev nvme admin passthru ...[2024-05-15 12:32:12.197971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.231 [2024-05-15 12:32:12.198016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.231 passed 00:11:03.231 Test: blockdev copy ...passed 00:11:03.231 Suite: bdevio tests on: Nvme2n3 00:11:03.231 Test: blockdev write read block ...passed 00:11:03.231 Test: blockdev write zeroes read block ...passed 00:11:03.231 Test: blockdev write zeroes read no split ...passed 00:11:03.231 Test: blockdev write zeroes read split ...passed 00:11:03.489 Test: blockdev write zeroes read split partial ...passed 00:11:03.489 Test: blockdev reset ...[2024-05-15 12:32:12.264054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:03.489 [2024-05-15 12:32:12.267910] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:03.489 passed 00:11:03.489 Test: blockdev write read 8 blocks ...passed 00:11:03.489 Test: blockdev write read size > 128k ...passed 00:11:03.489 Test: blockdev write read invalid size ...passed 00:11:03.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.489 Test: blockdev write read max offset ...passed 00:11:03.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.489 Test: blockdev writev readv 8 blocks ...passed 00:11:03.489 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.489 Test: blockdev writev readv block ...passed 00:11:03.489 Test: blockdev writev readv size > 128k ...passed 00:11:03.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.489 Test: blockdev comparev and writev ...[2024-05-15 12:32:12.276412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a504000 len:0x1000 00:11:03.489 [2024-05-15 12:32:12.276468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.489 passed 00:11:03.489 Test: blockdev nvme passthru rw ...passed 00:11:03.489 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.489 Test: blockdev nvme admin passthru ...[2024-05-15 12:32:12.277275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.489 [2024-05-15 12:32:12.277318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.489 passed 00:11:03.489 Test: blockdev copy ...passed 00:11:03.489 Suite: bdevio tests on: Nvme2n2 00:11:03.489 Test: blockdev write read block ...passed 00:11:03.489 Test: blockdev write zeroes read block ...passed 00:11:03.489 Test: blockdev write zeroes read no split ...passed 00:11:03.489 Test: blockdev write zeroes read split ...passed 00:11:03.489 Test: blockdev write zeroes read split partial ...passed 00:11:03.489 Test: blockdev reset ...[2024-05-15 12:32:12.339059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:03.489 [2024-05-15 12:32:12.342701] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
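[Editor's note: the "COMPARE FAILURE (02/85)" notices interleaved with the passing comparev tests are expected, not errors. A hypothetical helper (not part of the suite) that splits the "(SCT/SC)" pair these notices print; 0x2/0x85 is Media and Data Integrity Errors / Compare Failure per the NVMe spec, the status bdevio deliberately provokes with a miscompare.]

  # Split an "(SCT/SC)" pair as printed by the qpair notices above.
  decode_status() { printf 'SCT=0x%s SC=0x%s\n' "${1%/*}" "${1#*/}"; }
  decode_status 02/85   # -> SCT=0x02 SC=0x85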
00:11:03.489 passed 00:11:03.489 Test: blockdev write read 8 blocks ...passed 00:11:03.489 Test: blockdev write read size > 128k ...passed 00:11:03.489 Test: blockdev write read invalid size ...passed 00:11:03.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.489 Test: blockdev write read max offset ...passed 00:11:03.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.490 Test: blockdev writev readv 8 blocks ...passed 00:11:03.490 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.490 Test: blockdev writev readv block ...passed 00:11:03.490 Test: blockdev writev readv size > 128k ...passed 00:11:03.490 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.490 Test: blockdev comparev and writev ...[2024-05-15 12:32:12.351379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a504000 len:0x1000 00:11:03.490 [2024-05-15 12:32:12.351435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.490 passed 00:11:03.490 Test: blockdev nvme passthru rw ...passed 00:11:03.490 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.490 Test: blockdev nvme admin passthru ...[2024-05-15 12:32:12.352274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.490 [2024-05-15 12:32:12.352317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.490 passed 00:11:03.490 Test: blockdev copy ...passed 00:11:03.490 Suite: bdevio tests on: Nvme2n1 00:11:03.490 Test: blockdev write read block ...passed 00:11:03.490 Test: blockdev write zeroes read block ...passed 00:11:03.490 Test: blockdev write zeroes read no split ...passed 00:11:03.490 Test: blockdev write zeroes read split ...passed 00:11:03.490 Test: blockdev write zeroes read split partial ...passed 00:11:03.490 Test: blockdev reset ...[2024-05-15 12:32:12.417559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:03.490 [2024-05-15 12:32:12.421299] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:03.490 passed 00:11:03.490 Test: blockdev write read 8 blocks ...passed 00:11:03.490 Test: blockdev write read size > 128k ...passed 00:11:03.490 Test: blockdev write read invalid size ...passed 00:11:03.490 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.490 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.490 Test: blockdev write read max offset ...passed 00:11:03.490 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.490 Test: blockdev writev readv 8 blocks ...passed 00:11:03.490 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.490 Test: blockdev writev readv block ...passed 00:11:03.490 Test: blockdev writev readv size > 128k ...passed 00:11:03.490 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.490 Test: blockdev comparev and writev ...[2024-05-15 12:32:12.428942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29a03c000 len:0x1000 00:11:03.490 [2024-05-15 12:32:12.429000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.490 passed 00:11:03.490 Test: blockdev nvme passthru rw ...passed 00:11:03.490 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.490 Test: blockdev nvme admin passthru ...[2024-05-15 12:32:12.429765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.490 [2024-05-15 12:32:12.429808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.490 passed 00:11:03.490 Test: blockdev copy ...passed 00:11:03.490 Suite: bdevio tests on: Nvme1n1 00:11:03.490 Test: blockdev write read block ...passed 00:11:03.490 Test: blockdev write zeroes read block ...passed 00:11:03.490 Test: blockdev write zeroes read no split ...passed 00:11:03.490 Test: blockdev write zeroes read split ...passed 00:11:03.490 Test: blockdev write zeroes read split partial ...passed 00:11:03.490 Test: blockdev reset ...[2024-05-15 12:32:12.496691] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:03.749 [2024-05-15 12:32:12.500273] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
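[Editor's note: the Nvme2n1/n2/n3 suites above all target namespaces of the one controller at 0000:00:08.0, while Nvme3n1 sits at 0000:00:09.0, as the bdev JSON dump earlier in this log shows. A sketch for recovering that mapping live, assuming an SPDK app is serving the same bdevs on the default RPC socket.]

  # List each NVMe bdev with its controller PCI address and namespace ID.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" bdev_get_bdevs | jq -r \
    '.[] | select(.driver_specific.nvme) |
     "\(.name)\t\(.driver_specific.nvme[0].pci_address)\tnsid=\(.driver_specific.nvme[0].ns_data.id)"'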
00:11:03.749 passed 00:11:03.749 Test: blockdev write read 8 blocks ...passed 00:11:03.749 Test: blockdev write read size > 128k ...passed 00:11:03.749 Test: blockdev write read invalid size ...passed 00:11:03.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.749 Test: blockdev write read max offset ...passed 00:11:03.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.749 Test: blockdev writev readv 8 blocks ...passed 00:11:03.749 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.749 Test: blockdev writev readv block ...passed 00:11:03.749 Test: blockdev writev readv size > 128k ...passed 00:11:03.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.749 Test: blockdev comparev and writev ...[2024-05-15 12:32:12.508048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29a038000 len:0x1000 00:11:03.749 [2024-05-15 12:32:12.508108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.749 passed 00:11:03.749 Test: blockdev nvme passthru rw ...passed 00:11:03.749 Test: blockdev nvme passthru vendor specific ...passed[2024-05-15 12:32:12.508850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.749 [2024-05-15 12:32:12.508884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.749 00:11:03.749 Test: blockdev nvme admin passthru ...passed 00:11:03.749 Test: blockdev copy ...passed 00:11:03.749 Suite: bdevio tests on: Nvme0n1p2 00:11:03.749 Test: blockdev write read block ...passed 00:11:03.749 Test: blockdev write zeroes read block ...passed 00:11:03.749 Test: blockdev write zeroes read no split ...passed 00:11:03.749 Test: blockdev write zeroes read split ...passed 00:11:03.749 Test: blockdev write zeroes read split partial ...passed 00:11:03.749 Test: blockdev reset ...[2024-05-15 12:32:12.576170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:03.749 [2024-05-15 12:32:12.579619] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:03.749 passed 00:11:03.749 Test: blockdev write read 8 blocks ...passed 00:11:03.749 Test: blockdev write read size > 128k ...passed 00:11:03.749 Test: blockdev write read invalid size ...passed 00:11:03.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.749 Test: blockdev write read max offset ...passed 00:11:03.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.749 Test: blockdev writev readv 8 blocks ...passed 00:11:03.749 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.749 Test: blockdev writev readv block ...passed 00:11:03.749 Test: blockdev writev readv size > 128k ...passed 00:11:03.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.749 Test: blockdev comparev and writev ...passed 00:11:03.749 Test: blockdev nvme passthru rw ...passed 00:11:03.749 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.749 Test: blockdev nvme admin passthru ...passed 00:11:03.749 Test: blockdev copy ...[2024-05-15 12:32:12.586987] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:11:03.749 separate metadata which is not supported yet. 00:11:03.749 passed 00:11:03.749 Suite: bdevio tests on: Nvme0n1p1 00:11:03.749 Test: blockdev write read block ...passed 00:11:03.749 Test: blockdev write zeroes read block ...passed 00:11:03.749 Test: blockdev write zeroes read no split ...passed 00:11:03.749 Test: blockdev write zeroes read split ...passed 00:11:03.749 Test: blockdev write zeroes read split partial ...passed 00:11:03.749 Test: blockdev reset ...[2024-05-15 12:32:12.641925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:03.749 [2024-05-15 12:32:12.645451] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:03.749 passed 00:11:03.749 Test: blockdev write read 8 blocks ...passed 00:11:03.749 Test: blockdev write read size > 128k ...passed 00:11:03.749 Test: blockdev write read invalid size ...passed 00:11:03.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.749 Test: blockdev write read max offset ...passed 00:11:03.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.749 Test: blockdev writev readv 8 blocks ...passed 00:11:03.749 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.749 Test: blockdev writev readv block ...passed 00:11:03.749 Test: blockdev writev readv size > 128k ...passed 00:11:03.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.749 Test: blockdev comparev and writev ...passed 00:11:03.749 Test: blockdev nvme passthru rw ...passed 00:11:03.749 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.749 Test: blockdev nvme admin passthru ...passed 00:11:03.749 Test: blockdev copy ...[2024-05-15 12:32:12.654035] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:11:03.749 separate metadata which is not supported yet. 
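[Editor's note: the two skip notices above fire because Nvme0n1p1/p2 carry separate metadata, which comparev_and_writev does not support yet. A quick check, assuming the md_size field is reported by bdev_get_bdevs in this SPDK version; treat the field name as an assumption.]

  # A nonzero/non-null md_size indicates the bdev carries metadata.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme0n1p1 | jq '.[0].md_size'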
00:11:03.749 passed 00:11:03.749 00:11:03.749 Run Summary: Type Total Ran Passed Failed Inactive 00:11:03.749 suites 7 7 n/a 0 0 00:11:03.749 tests 161 161 161 0 0 00:11:03.749 asserts 1006 1006 1006 0 n/a 00:11:03.749 00:11:03.749 Elapsed time = 1.435 seconds 00:11:03.749 0 00:11:03.749 12:32:12 -- bdev/blockdev.sh@293 -- # killprocess 63499 00:11:03.749 12:32:12 -- common/autotest_common.sh@926 -- # '[' -z 63499 ']' 00:11:03.749 12:32:12 -- common/autotest_common.sh@930 -- # kill -0 63499 00:11:03.749 12:32:12 -- common/autotest_common.sh@931 -- # uname 00:11:03.749 12:32:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:03.749 12:32:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63499 00:11:03.749 12:32:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:03.749 12:32:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:03.749 12:32:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63499' 00:11:03.749 killing process with pid 63499 00:11:03.749 12:32:12 -- common/autotest_common.sh@945 -- # kill 63499 00:11:03.749 12:32:12 -- common/autotest_common.sh@950 -- # wait 63499 00:11:04.698 12:32:13 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:04.698 00:11:04.698 real 0m3.439s 00:11:04.698 user 0m8.990s 00:11:04.698 sys 0m0.446s 00:11:04.698 12:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.698 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.698 ************************************ 00:11:04.698 END TEST bdev_bounds 00:11:04.698 ************************************ 00:11:04.956 12:32:13 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:04.956 12:32:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:04.956 12:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.956 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.956 ************************************ 00:11:04.956 START TEST bdev_nbd 00:11:04.956 ************************************ 00:11:04.956 12:32:13 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:04.956 12:32:13 -- bdev/blockdev.sh@298 -- # uname -s 00:11:04.956 12:32:13 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:04.956 12:32:13 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.956 12:32:13 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:04.956 12:32:13 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:04.956 12:32:13 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:04.956 12:32:13 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:11:04.956 12:32:13 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:04.956 12:32:13 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:04.956 12:32:13 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:04.956 12:32:13 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:11:04.956 12:32:13 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:04.956 12:32:13 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:04.956 12:32:13 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:04.956 12:32:13 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:04.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:04.956 12:32:13 -- bdev/blockdev.sh@316 -- # nbd_pid=63568 00:11:04.956 12:32:13 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:04.956 12:32:13 -- bdev/blockdev.sh@318 -- # waitforlisten 63568 /var/tmp/spdk-nbd.sock 00:11:04.956 12:32:13 -- common/autotest_common.sh@819 -- # '[' -z 63568 ']' 00:11:04.956 12:32:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:04.956 12:32:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:04.956 12:32:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:04.957 12:32:13 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:04.957 12:32:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:04.957 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.957 [2024-05-15 12:32:13.819458] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:04.957 [2024-05-15 12:32:13.819630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.216 [2024-05-15 12:32:13.986863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.216 [2024-05-15 12:32:14.217166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.591 12:32:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:06.591 12:32:15 -- common/autotest_common.sh@852 -- # return 0 00:11:06.591 12:32:15 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@24 -- # local i 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:06.591 12:32:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:11:06.850 12:32:15 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:06.850 12:32:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:06.850 12:32:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:06.850 12:32:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:06.850 12:32:15 -- common/autotest_common.sh@857 -- # local i 00:11:06.850 12:32:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:06.850 12:32:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:06.850 12:32:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:06.850 12:32:15 -- common/autotest_common.sh@861 -- # break 00:11:06.850 12:32:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:06.850 12:32:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:06.850 12:32:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.850 1+0 records in 00:11:06.850 1+0 records out 00:11:06.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429814 s, 9.5 MB/s 00:11:06.850 12:32:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.850 12:32:15 -- common/autotest_common.sh@874 -- # size=4096 00:11:06.850 12:32:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.850 12:32:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:06.850 12:32:15 -- common/autotest_common.sh@877 -- # return 0 00:11:06.850 12:32:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:06.850 12:32:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:06.850 12:32:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:11:07.109 12:32:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:07.109 12:32:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:07.109 12:32:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:07.109 12:32:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:07.109 12:32:16 -- common/autotest_common.sh@857 -- # local i 00:11:07.109 12:32:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:07.109 12:32:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:07.109 12:32:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:07.109 12:32:16 -- common/autotest_common.sh@861 -- # break 00:11:07.109 12:32:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:07.109 12:32:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:07.109 12:32:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.109 1+0 records in 00:11:07.109 1+0 records out 00:11:07.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546862 s, 7.5 MB/s 00:11:07.109 12:32:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.109 12:32:16 -- common/autotest_common.sh@874 -- # size=4096 00:11:07.109 12:32:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.109 12:32:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:07.109 12:32:16 -- common/autotest_common.sh@877 -- # return 0 00:11:07.109 12:32:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.109 12:32:16 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.109 12:32:16 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:11:07.368 12:32:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:07.368 12:32:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:07.368 12:32:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:07.368 12:32:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:07.368 12:32:16 -- common/autotest_common.sh@857 -- # local i 00:11:07.368 12:32:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:07.368 12:32:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:07.368 12:32:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:07.368 12:32:16 -- common/autotest_common.sh@861 -- # break 00:11:07.368 12:32:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:07.368 12:32:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:07.368 12:32:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.368 1+0 records in 00:11:07.368 1+0 records out 00:11:07.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527101 s, 7.8 MB/s 00:11:07.368 12:32:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.368 12:32:16 -- common/autotest_common.sh@874 -- # size=4096 00:11:07.368 12:32:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.368 12:32:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:07.368 12:32:16 -- common/autotest_common.sh@877 -- # return 0 00:11:07.368 12:32:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.368 12:32:16 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.368 12:32:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:07.626 12:32:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:07.626 12:32:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:07.626 12:32:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:07.626 12:32:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:07.626 12:32:16 -- common/autotest_common.sh@857 -- # local i 00:11:07.626 12:32:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:07.626 12:32:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:07.626 12:32:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:07.626 12:32:16 -- common/autotest_common.sh@861 -- # break 00:11:07.626 12:32:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:07.626 12:32:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:07.626 12:32:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.626 1+0 records in 00:11:07.626 1+0 records out 00:11:07.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597313 s, 6.9 MB/s 00:11:07.627 12:32:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.627 12:32:16 -- common/autotest_common.sh@874 -- # size=4096 00:11:07.627 12:32:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.627 12:32:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:07.627 12:32:16 -- common/autotest_common.sh@877 -- # return 0 00:11:07.627 12:32:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.627 12:32:16 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.627 12:32:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:07.885 12:32:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:07.885 12:32:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:07.885 12:32:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:07.885 12:32:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:07.885 12:32:16 -- common/autotest_common.sh@857 -- # local i 00:11:07.885 12:32:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:07.885 12:32:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:07.885 12:32:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:07.885 12:32:16 -- common/autotest_common.sh@861 -- # break 00:11:07.885 12:32:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:07.885 12:32:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:07.885 12:32:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.885 1+0 records in 00:11:07.885 1+0 records out 00:11:07.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625634 s, 6.5 MB/s 00:11:07.885 12:32:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.885 12:32:16 -- common/autotest_common.sh@874 -- # size=4096 00:11:07.885 12:32:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.885 12:32:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:07.885 12:32:16 -- common/autotest_common.sh@877 -- # return 0 00:11:07.885 12:32:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.885 12:32:16 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.885 12:32:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:11:08.144 12:32:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:08.144 12:32:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:08.144 12:32:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:08.144 12:32:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:11:08.144 12:32:17 -- common/autotest_common.sh@857 -- # local i 00:11:08.144 12:32:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:08.144 12:32:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:08.144 12:32:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:08.144 12:32:17 -- common/autotest_common.sh@861 -- # break 00:11:08.144 12:32:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:08.144 12:32:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:08.144 12:32:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.144 1+0 records in 00:11:08.144 1+0 records out 00:11:08.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816217 s, 5.0 MB/s 00:11:08.144 12:32:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.144 12:32:17 -- common/autotest_common.sh@874 -- # size=4096 00:11:08.144 12:32:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.144 12:32:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:08.144 12:32:17 -- common/autotest_common.sh@877 -- # return 0 
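[Editor's note: a condensed form of the waitfornbd helper whose expansion is traced repeatedly above. The grep/break loop and the single-block direct-I/O dd are taken from the trace; the 0.1 s sleep between polls is an assumption, and the test's scratch file is replaced with /dev/null.]

  # Wait until an nbd device appears, then verify it serves I/O.
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # the device is up once it shows in the kernel partition table
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # read one 4 KiB block to confirm the nbd<->bdev path works
      dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
  }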
00:11:08.144 12:32:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.144 12:32:17 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:08.144 12:32:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:08.403 12:32:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:08.403 12:32:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:08.403 12:32:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:08.403 12:32:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:08.403 12:32:17 -- common/autotest_common.sh@857 -- # local i 00:11:08.403 12:32:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:08.403 12:32:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:08.403 12:32:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:08.403 12:32:17 -- common/autotest_common.sh@861 -- # break 00:11:08.403 12:32:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:08.403 12:32:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:08.403 12:32:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.403 1+0 records in 00:11:08.403 1+0 records out 00:11:08.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779043 s, 5.3 MB/s 00:11:08.403 12:32:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.403 12:32:17 -- common/autotest_common.sh@874 -- # size=4096 00:11:08.403 12:32:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.403 12:32:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:08.403 12:32:17 -- common/autotest_common.sh@877 -- # return 0 00:11:08.403 12:32:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.403 12:32:17 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:08.403 12:32:17 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:08.663 12:32:17 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd0", 00:11:08.663 "bdev_name": "Nvme0n1p1" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd1", 00:11:08.663 "bdev_name": "Nvme0n1p2" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd2", 00:11:08.663 "bdev_name": "Nvme1n1" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd3", 00:11:08.663 "bdev_name": "Nvme2n1" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd4", 00:11:08.663 "bdev_name": "Nvme2n2" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd5", 00:11:08.663 "bdev_name": "Nvme2n3" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd6", 00:11:08.663 "bdev_name": "Nvme3n1" 00:11:08.663 } 00:11:08.663 ]' 00:11:08.663 12:32:17 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:08.663 12:32:17 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:08.663 12:32:17 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd0", 00:11:08.663 "bdev_name": "Nvme0n1p1" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd1", 00:11:08.663 "bdev_name": "Nvme0n1p2" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd2", 00:11:08.663 "bdev_name": "Nvme1n1" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": 
"/dev/nbd3", 00:11:08.663 "bdev_name": "Nvme2n1" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd4", 00:11:08.663 "bdev_name": "Nvme2n2" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd5", 00:11:08.663 "bdev_name": "Nvme2n3" 00:11:08.663 }, 00:11:08.663 { 00:11:08.663 "nbd_device": "/dev/nbd6", 00:11:08.663 "bdev_name": "Nvme3n1" 00:11:08.663 } 00:11:08.663 ]' 00:11:08.922 12:32:17 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:08.922 12:32:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.922 12:32:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:08.922 12:32:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:08.922 12:32:17 -- bdev/nbd_common.sh@51 -- # local i 00:11:08.922 12:32:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:08.922 12:32:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@41 -- # break 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.180 12:32:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@41 -- # break 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.439 12:32:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@41 -- # break 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.712 12:32:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:09.975 12:32:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:11:09.975 12:32:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:09.975 12:32:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:09.976 12:32:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.976 12:32:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.976 12:32:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:09.976 12:32:18 -- bdev/nbd_common.sh@41 -- # break 00:11:09.976 12:32:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.976 12:32:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.976 12:32:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@41 -- # break 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.235 12:32:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@41 -- # break 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.494 12:32:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@41 -- # break 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:10.754 12:32:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:11.014 12:32:19 -- 
bdev/nbd_common.sh@65 -- # true 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@65 -- # count=0 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@122 -- # count=0 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@127 -- # return 0 00:11:11.014 12:32:19 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@12 -- # local i 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:11.014 12:32:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:11:11.273 /dev/nbd0 00:11:11.273 12:32:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.273 12:32:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:11.273 12:32:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:11.273 12:32:20 -- common/autotest_common.sh@857 -- # local i 00:11:11.273 12:32:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:11.273 12:32:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:11.273 12:32:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:11.273 12:32:20 -- common/autotest_common.sh@861 -- # break 00:11:11.273 12:32:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:11.273 12:32:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:11.273 12:32:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.273 1+0 records in 00:11:11.273 1+0 records out 00:11:11.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461073 s, 8.9 MB/s 00:11:11.273 12:32:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.273 12:32:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:11.273 12:32:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.273 12:32:20 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:11.273 12:32:20 -- common/autotest_common.sh@877 -- # return 0 00:11:11.273 12:32:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.273 12:32:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:11.273 12:32:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:11:11.533 /dev/nbd1 00:11:11.533 12:32:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:11.533 12:32:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:11.533 12:32:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:11.533 12:32:20 -- common/autotest_common.sh@857 -- # local i 00:11:11.533 12:32:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:11.533 12:32:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:11.533 12:32:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:11.533 12:32:20 -- common/autotest_common.sh@861 -- # break 00:11:11.533 12:32:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:11.533 12:32:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:11.533 12:32:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.533 1+0 records in 00:11:11.533 1+0 records out 00:11:11.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574805 s, 7.1 MB/s 00:11:11.533 12:32:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.533 12:32:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:11.533 12:32:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.533 12:32:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:11.533 12:32:20 -- common/autotest_common.sh@877 -- # return 0 00:11:11.533 12:32:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.533 12:32:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:11.533 12:32:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:11:11.792 /dev/nbd10 00:11:11.792 12:32:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:11.792 12:32:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:11.792 12:32:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:11.792 12:32:20 -- common/autotest_common.sh@857 -- # local i 00:11:11.792 12:32:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:11.792 12:32:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:11.792 12:32:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:11.792 12:32:20 -- common/autotest_common.sh@861 -- # break 00:11:11.792 12:32:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:11.792 12:32:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:11.792 12:32:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.792 1+0 records in 00:11:11.792 1+0 records out 00:11:11.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632649 s, 6.5 MB/s 00:11:11.792 12:32:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.792 12:32:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:11.792 12:32:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
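[Editor's note: one start/verify/stop round for a single bdev, mirroring the loop traced through this part of the log. The nbd_start_disk/nbd_stop_disk calls and socket path are verbatim from the trace; it is assumed a bdev_svc instance is already listening on that socket.]

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # device is live
  $RPC nbd_stop_disk /dev/nbd0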
00:11:11.792 12:32:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:11.792 12:32:20 -- common/autotest_common.sh@877 -- # return 0 00:11:11.792 12:32:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.792 12:32:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:11.792 12:32:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:12.051 /dev/nbd11 00:11:12.051 12:32:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:12.051 12:32:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:12.051 12:32:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:12.051 12:32:21 -- common/autotest_common.sh@857 -- # local i 00:11:12.051 12:32:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:12.051 12:32:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:12.051 12:32:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:12.051 12:32:21 -- common/autotest_common.sh@861 -- # break 00:11:12.051 12:32:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:12.051 12:32:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:12.051 12:32:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.051 1+0 records in 00:11:12.051 1+0 records out 00:11:12.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904585 s, 4.5 MB/s 00:11:12.051 12:32:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.051 12:32:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:12.051 12:32:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.051 12:32:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:12.051 12:32:21 -- common/autotest_common.sh@877 -- # return 0 00:11:12.051 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.051 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:12.051 12:32:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:12.309 /dev/nbd12 00:11:12.309 12:32:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:12.309 12:32:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:12.309 12:32:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:12.309 12:32:21 -- common/autotest_common.sh@857 -- # local i 00:11:12.309 12:32:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:12.309 12:32:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:12.309 12:32:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:12.309 12:32:21 -- common/autotest_common.sh@861 -- # break 00:11:12.309 12:32:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:12.309 12:32:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:12.309 12:32:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.309 1+0 records in 00:11:12.309 1+0 records out 00:11:12.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068125 s, 6.0 MB/s 00:11:12.309 12:32:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.309 12:32:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:12.309 12:32:21 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.309 12:32:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:12.309 12:32:21 -- common/autotest_common.sh@877 -- # return 0 00:11:12.309 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.309 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:12.309 12:32:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:12.569 /dev/nbd13 00:11:12.569 12:32:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:12.569 12:32:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:12.569 12:32:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:12.569 12:32:21 -- common/autotest_common.sh@857 -- # local i 00:11:12.569 12:32:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:12.569 12:32:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:12.569 12:32:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:12.569 12:32:21 -- common/autotest_common.sh@861 -- # break 00:11:12.569 12:32:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:12.569 12:32:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:12.569 12:32:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.569 1+0 records in 00:11:12.569 1+0 records out 00:11:12.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847185 s, 4.8 MB/s 00:11:12.569 12:32:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.569 12:32:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:12.569 12:32:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.569 12:32:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:12.569 12:32:21 -- common/autotest_common.sh@877 -- # return 0 00:11:12.569 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.569 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:12.569 12:32:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:12.827 /dev/nbd14 00:11:12.827 12:32:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:12.827 12:32:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:12.827 12:32:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:12.827 12:32:21 -- common/autotest_common.sh@857 -- # local i 00:11:12.827 12:32:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:12.827 12:32:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:12.827 12:32:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:12.827 12:32:21 -- common/autotest_common.sh@861 -- # break 00:11:12.827 12:32:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:12.827 12:32:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:12.827 12:32:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.827 1+0 records in 00:11:12.827 1+0 records out 00:11:12.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000911224 s, 4.5 MB/s 00:11:12.827 12:32:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.827 12:32:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:12.827 12:32:21 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.827 12:32:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:12.827 12:32:21 -- common/autotest_common.sh@877 -- # return 0 00:11:12.827 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.827 12:32:21 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:12.827 12:32:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.827 12:32:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.827 12:32:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd0", 00:11:13.394 "bdev_name": "Nvme0n1p1" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd1", 00:11:13.394 "bdev_name": "Nvme0n1p2" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd10", 00:11:13.394 "bdev_name": "Nvme1n1" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd11", 00:11:13.394 "bdev_name": "Nvme2n1" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd12", 00:11:13.394 "bdev_name": "Nvme2n2" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd13", 00:11:13.394 "bdev_name": "Nvme2n3" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd14", 00:11:13.394 "bdev_name": "Nvme3n1" 00:11:13.394 } 00:11:13.394 ]' 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd0", 00:11:13.394 "bdev_name": "Nvme0n1p1" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd1", 00:11:13.394 "bdev_name": "Nvme0n1p2" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd10", 00:11:13.394 "bdev_name": "Nvme1n1" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd11", 00:11:13.394 "bdev_name": "Nvme2n1" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd12", 00:11:13.394 "bdev_name": "Nvme2n2" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd13", 00:11:13.394 "bdev_name": "Nvme2n3" 00:11:13.394 }, 00:11:13.394 { 00:11:13.394 "nbd_device": "/dev/nbd14", 00:11:13.394 "bdev_name": "Nvme3n1" 00:11:13.394 } 00:11:13.394 ]' 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:13.394 /dev/nbd1 00:11:13.394 /dev/nbd10 00:11:13.394 /dev/nbd11 00:11:13.394 /dev/nbd12 00:11:13.394 /dev/nbd13 00:11:13.394 /dev/nbd14' 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:13.394 /dev/nbd1 00:11:13.394 /dev/nbd10 00:11:13.394 /dev/nbd11 00:11:13.394 /dev/nbd12 00:11:13.394 /dev/nbd13 00:11:13.394 /dev/nbd14' 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@65 -- # count=7 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@66 -- # echo 7 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@95 -- # count=7 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:13.394 256+0 records in 00:11:13.394 256+0 records out 00:11:13.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771821 s, 136 MB/s 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:13.394 256+0 records in 00:11:13.394 256+0 records out 00:11:13.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169088 s, 6.2 MB/s 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.394 12:32:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:13.716 256+0 records in 00:11:13.716 256+0 records out 00:11:13.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159265 s, 6.6 MB/s 00:11:13.716 12:32:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.716 12:32:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:13.716 256+0 records in 00:11:13.716 256+0 records out 00:11:13.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154392 s, 6.8 MB/s 00:11:13.716 12:32:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.716 12:32:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:13.991 256+0 records in 00:11:13.991 256+0 records out 00:11:13.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156731 s, 6.7 MB/s 00:11:13.991 12:32:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.991 12:32:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:13.991 256+0 records in 00:11:13.991 256+0 records out 00:11:13.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163458 s, 6.4 MB/s 00:11:13.991 12:32:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.991 12:32:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:14.275 256+0 records in 00:11:14.275 256+0 records out 00:11:14.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15913 s, 6.6 MB/s 00:11:14.275 12:32:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.275 12:32:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:14.543 256+0 records in 00:11:14.543 256+0 records out 00:11:14.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155992 s, 6.7 MB/s 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:14.543 12:32:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:14.544 12:32:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:14.544 12:32:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.544 12:32:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:14.544 12:32:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:14.544 12:32:23 -- bdev/nbd_common.sh@51 -- # local i 00:11:14.544 12:32:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.544 12:32:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@41 -- # break 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.802 12:32:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@41 -- # break 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.060 12:32:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@41 -- # break 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.318 12:32:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@41 -- # break 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.577 12:32:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@41 -- # break 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.835 12:32:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@41 -- # break 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.094 12:32:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:11:16.094 12:32:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@41 -- # break 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.352 12:32:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@65 -- # true 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@65 -- # count=0 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@104 -- # count=0 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@109 -- # return 0 00:11:16.611 12:32:25 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:16.611 12:32:25 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:16.869 malloc_lvol_verify 00:11:16.869 12:32:25 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:17.127 8602a5bd-4b39-451e-a882-9a4176d8365d 00:11:17.127 12:32:26 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:17.385 9990841c-2969-4778-9357-00111674e0b4 00:11:17.385 12:32:26 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:17.643 /dev/nbd0 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:17.643 mke2fs 1.46.5 (30-Dec-2021) 00:11:17.643 Discarding device blocks: 0/4096 done 00:11:17.643 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:17.643 00:11:17.643 Allocating group tables: 0/1 done 00:11:17.643 Writing inode tables: 0/1 done 00:11:17.643 Creating journal (1024 blocks): done 
00:11:17.643 Writing superblocks and filesystem accounting information: 0/1 done 00:11:17.643 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@51 -- # local i 00:11:17.643 12:32:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:17.644 12:32:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@41 -- # break 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:17.902 12:32:26 -- bdev/nbd_common.sh@147 -- # return 0 00:11:17.902 12:32:26 -- bdev/blockdev.sh@324 -- # killprocess 63568 00:11:17.902 12:32:26 -- common/autotest_common.sh@926 -- # '[' -z 63568 ']' 00:11:17.902 12:32:26 -- common/autotest_common.sh@930 -- # kill -0 63568 00:11:17.902 12:32:26 -- common/autotest_common.sh@931 -- # uname 00:11:17.902 12:32:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:17.902 12:32:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63568 00:11:18.160 12:32:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:18.160 killing process with pid 63568 00:11:18.160 12:32:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:18.160 12:32:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63568' 00:11:18.160 12:32:26 -- common/autotest_common.sh@945 -- # kill 63568 00:11:18.160 12:32:26 -- common/autotest_common.sh@950 -- # wait 63568 00:11:19.535 12:32:28 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:19.535 00:11:19.535 real 0m14.418s 00:11:19.535 user 0m20.197s 00:11:19.535 sys 0m4.550s 00:11:19.535 12:32:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.535 ************************************ 00:11:19.535 END TEST bdev_nbd 00:11:19.535 ************************************ 00:11:19.535 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:11:19.535 12:32:28 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:19.535 12:32:28 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:11:19.535 12:32:28 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:11:19.535 skipping fio tests on NVMe due to multi-ns failures. 00:11:19.535 12:32:28 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
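[Editor's note] The bdev_nbd phase above drives a simple write/read-back verification: it seeds a 1 MiB file from /dev/urandom, writes it to each exported /dev/nbd* device with O_DIRECT, then compares every device byte for byte against the source file. A condensed sketch of that pattern, using the device list and paths from this run (the bdevs must already be exported via nbd_start_disk):

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    # seed 1 MiB (256 x 4 KiB blocks) of random data
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    # write the pattern to every exported nbd device, bypassing the page cache
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # read each device back and compare byte for byte against the source
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm -f "$tmp_file"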
00:11:19.535 12:32:28 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:11:19.535 12:32:28 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:11:19.535 12:32:28 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:11:19.535 12:32:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:19.535 12:32:28 -- common/autotest_common.sh@10 -- # set +x
00:11:19.535 ************************************
00:11:19.535 START TEST bdev_verify
00:11:19.535 ************************************
00:11:19.535 12:32:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:11:19.535 [2024-05-15 12:32:28.292838] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:11:19.535 [2024-05-15 12:32:28.292994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64022 ]
00:11:19.535 [2024-05-15 12:32:28.458400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:19.799 [2024-05-15 12:32:28.701819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:19.799 [2024-05-15 12:32:28.701847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:20.731 Running I/O for 5 seconds...
00:11:26.026
00:11:26.026 Latency(us)
00:11:26.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:26.026 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x0 length 0x5e800
00:11:26.026 Nvme0n1p1 : 5.05 2322.33 9.07 0.00 0.00 54942.52 7387.69 59101.56
00:11:26.026 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x5e800 length 0x5e800
00:11:26.026 Nvme0n1p1 : 5.06 2333.20 9.11 0.00 0.00 54476.22 4319.42 47900.86
00:11:26.026 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x0 length 0x5e7ff
00:11:26.026 Nvme0n1p2 : 5.05 2321.62 9.07 0.00 0.00 54893.13 7685.59 55288.55
00:11:26.026 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:11:26.026 Nvme0n1p2 : 5.06 2332.05 9.11 0.00 0.00 54440.94 5749.29 46232.67
00:11:26.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x0 length 0xa0000
00:11:26.026 Nvme1n1 : 5.05 2326.78 9.09 0.00 0.00 54762.98 4051.32 52428.80
00:11:26.026 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0xa0000 length 0xa0000
00:11:26.026 Nvme1n1 : 5.05 2329.98 9.10 0.00 0.00 54772.05 7745.16 56241.80
00:11:26.026 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x0 length 0x80000
00:11:26.026 Nvme2n1 : 5.06 2326.13 9.09 0.00 0.00 54722.23 4617.31 50283.99
00:11:26.026 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x80000 length 0x80000
00:11:26.026 Nvme2n1 : 5.05 2328.80 9.10 0.00 0.00 54745.27 8698.41 55050.24
00:11:26.026 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x0 length 0x80000
00:11:26.026 Nvme2n2 : 5.06 2325.49 9.08 0.00 0.00 54683.37 4974.78 48377.48
00:11:26.026 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x80000 length 0x80000
00:11:26.026 Nvme2n2 : 5.05 2328.13 9.09 0.00 0.00 54698.41 8996.31 53382.05
00:11:26.026 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x0 length 0x80000
00:11:26.026 Nvme2n3 : 5.06 2324.26 9.08 0.00 0.00 54654.04 6762.12 48377.48
00:11:26.026 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x80000 length 0x80000
00:11:26.026 Nvme2n3 : 5.05 2327.49 9.09 0.00 0.00 54658.64 9234.62 51237.24
00:11:26.026 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x0 length 0x20000
00:11:26.026 Nvme3n1 : 5.06 2323.16 9.07 0.00 0.00 54602.52 8340.95 48377.48
00:11:26.026 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.026 Verification LBA range: start 0x20000 length 0x20000
00:11:26.026 Nvme3n1 : 5.06 2334.49 9.12 0.00 0.00 54512.53 2606.55 49330.73
00:11:26.026 ===================================================================================================================
00:11:26.026 Total : 32583.91 127.28 0.00 0.00 54682.91 2606.55 59101.56
00:11:27.955
00:11:27.955 real 0m8.701s
00:11:27.955 user 0m15.975s
00:11:27.955 sys 0m0.322s
00:11:27.955 12:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:27.955 12:32:36 -- common/autotest_common.sh@10 -- # set +x
00:11:27.955 ************************************
00:11:27.955 END TEST bdev_verify
00:11:27.955 ************************************
00:11:27.956 12:32:36 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:27.956 12:32:36 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:11:27.956 12:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:27.956 12:32:36 -- common/autotest_common.sh@10 -- # set +x
00:11:27.956 ************************************
00:11:27.956 START TEST bdev_verify_big_io
00:11:27.956 ************************************
00:11:27.956 12:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:28.214 [2024-05-15 12:32:37.069745] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:11:28.214 [2024-05-15 12:32:37.069929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64131 ]
00:11:28.472 [2024-05-15 12:32:37.245205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:28.730 [2024-05-15 12:32:37.492425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:28.730 [2024-05-15 12:32:37.492426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:29.664 Running I/O for 5 seconds...
00:11:34.931
00:11:34.931 Latency(us)
00:11:34.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:34.931 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x0 length 0x5e80
00:11:34.931 Nvme0n1p1 : 5.47 206.21 12.89 0.00 0.00 610001.40 11141.12 800730.76
00:11:34.931 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x5e80 length 0x5e80
00:11:34.931 Nvme0n1p1 : 5.49 214.03 13.38 0.00 0.00 591518.43 11260.28 789291.75
00:11:34.931 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x0 length 0x5e7f
00:11:34.931 Nvme0n1p2 : 5.48 206.12 12.88 0.00 0.00 602889.48 11736.90 732096.70
00:11:34.931 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x5e7f length 0x5e7f
00:11:34.931 Nvme0n1p2 : 5.49 213.91 13.37 0.00 0.00 582736.96 12273.11 716844.68
00:11:34.931 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x0 length 0xa000
00:11:34.931 Nvme1n1 : 5.48 206.03 12.88 0.00 0.00 594933.77 12749.73 705405.67
00:11:34.931 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0xa000 length 0xa000
00:11:34.931 Nvme1n1 : 5.50 213.82 13.36 0.00 0.00 575126.87 13345.51 648210.62
00:11:34.931 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x0 length 0x8000
00:11:34.931 Nvme2n1 : 5.49 214.05 13.38 0.00 0.00 569309.08 6285.50 697779.67
00:11:34.931 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x8000 length 0x8000
00:11:34.931 Nvme2n1 : 5.50 213.73 13.36 0.00 0.00 567326.97 13822.14 594828.57
00:11:34.931 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x0 length 0x8000
00:11:34.931 Nvme2n2 : 5.49 213.95 13.37 0.00 0.00 561323.75 7268.54 629145.60
00:11:34.931 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x8000 length 0x8000
00:11:34.931 Nvme2n2 : 5.50 213.63 13.35 0.00 0.00 559587.41 14715.81 606267.58
00:11:34.931 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x0 length 0x8000
00:11:34.931 Nvme2n3 : 5.49 213.86 13.37 0.00 0.00 553613.12 7745.16 632958.60
00:11:34.931 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x8000 length 0x8000
00:11:34.931 Nvme2n3 : 5.51 220.50 13.78 0.00 0.00 535916.83 2502.28 610080.58
00:11:34.931 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x0 length 0x2000
00:11:34.931 Nvme3n1 : 5.50 213.77 13.36 0.00 0.00 545597.90 8340.95 1014258.97
00:11:34.931 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:34.931 Verification LBA range: start 0x2000 length 0x2000
00:11:34.932 Nvme3n1 : 5.51 220.42 13.78 0.00 0.00 528362.53 2993.80 846486.81
00:11:34.932 ===================================================================================================================
00:11:34.932 Total : 2984.05 186.50 0.00 0.00 569419.42 2502.28 1014258.97
00:11:36.835
00:11:36.835 real 0m8.721s
00:11:36.835 user 0m15.911s
00:11:36.835 sys 0m0.381s
00:11:36.835 12:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:36.835 ************************************
00:11:36.835 END TEST bdev_verify_big_io
00:11:36.835 ************************************
00:11:36.835 12:32:45 -- common/autotest_common.sh@10 -- # set +x
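[Editor's note] The two verify passes above are the same bdevperf invocation differing only in I/O size (-o 4096 vs -o 65536), which is why the big-I/O pass reports far lower IOPS for the same bdevs. A condensed form of both commands as they appear in this run (-q is queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds, -m the reactor core mask):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3    # 4 KiB verify pass
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # 64 KiB big-I/O verify pass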
00:11:36.835 12:32:45 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:36.835 12:32:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:11:36.835 12:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:36.835 12:32:45 -- common/autotest_common.sh@10 -- # set +x
00:11:36.835 ************************************
00:11:36.835 START TEST bdev_write_zeroes
00:11:36.835 ************************************
00:11:36.835 12:32:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:37.093 [2024-05-15 12:32:45.813884] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:11:37.093 [2024-05-15 12:32:45.814053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64240 ]
00:11:37.352 [2024-05-15 12:32:45.979349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:37.918 [2024-05-15 12:32:46.223865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:37.918 Running I/O for 1 seconds...
00:11:39.291
00:11:39.291 Latency(us)
00:11:39.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:39.291 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.291 Nvme0n1p1 : 1.02 6887.82 26.91 0.00 0.00 18502.80 8638.84 34078.72
00:11:39.291 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.291 Nvme0n1p2 : 1.02 6875.26 26.86 0.00 0.00 18496.58 14596.65 26095.24
00:11:39.291 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.291 Nvme1n1 : 1.03 6864.01 26.81 0.00 0.00 18467.53 8757.99 30742.34
00:11:39.291 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.291 Nvme2n1 : 1.03 6853.26 26.77 0.00 0.00 18397.00 10724.07 25022.84
00:11:39.291 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.291 Nvme2n2 : 1.03 6893.34 26.93 0.00 0.00 18321.76 9353.77 25261.15
00:11:39.291 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.291 Nvme2n3 : 1.03 6882.47 26.88 0.00 0.00 18307.82 8579.26 25261.15
00:11:39.291 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.291 Nvme3n1 : 1.03 6872.04 26.84 0.00 0.00 18302.82 8281.37 25022.84
00:11:39.291 ===================================================================================================================
00:11:39.291 Total : 48128.20 188.00 0.00 0.00 18399.13 8281.37 34078.72
00:11:40.665
00:11:40.665 real 0m3.519s
00:11:40.665 user 0m3.107s
00:11:40.665 sys 0m0.285s
00:11:40.665 12:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:40.665 ************************************
00:11:40.665 END TEST bdev_write_zeroes
00:11:40.665 ************************************
00:11:40.665 12:32:49 -- common/autotest_common.sh@10 -- # set +x
00:11:40.665 12:32:49 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:40.665 12:32:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:11:40.665 12:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:40.665 12:32:49 -- common/autotest_common.sh@10 -- # set +x
00:11:40.665 ************************************
00:11:40.665 START TEST bdev_json_nonenclosed
00:11:40.665 ************************************
00:11:40.665 12:32:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:40.665 [2024-05-15 12:32:49.400887] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:11:40.665 [2024-05-15 12:32:49.401077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64299 ]
00:11:40.924 [2024-05-15 12:32:49.574744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:40.924 [2024-05-15 12:32:49.818657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:40.924 [2024-05-15 12:32:49.818858] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:11:40.924 [2024-05-15 12:32:49.818889] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:41.490
00:11:41.490 real 0m0.915s
00:11:41.490 user 0m0.662s
00:11:41.490 sys 0m0.145s
00:11:41.490 12:32:50 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:41.490 12:32:50 -- common/autotest_common.sh@10 -- # set +x
00:11:41.490 ************************************
00:11:41.490 END TEST bdev_json_nonenclosed
00:11:41.490 ************************************
00:11:41.490 12:32:50 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:41.490 12:32:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:11:41.490 12:32:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:41.490 12:32:50 -- common/autotest_common.sh@10 -- # set +x
00:11:41.490 ************************************
00:11:41.490 START TEST bdev_json_nonarray
00:11:41.490 ************************************
00:11:41.490 12:32:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:41.748 [2024-05-15 12:32:50.361399] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:11:41.748 [2024-05-15 12:32:50.361656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64330 ]
00:11:42.007 [2024-05-15 12:32:50.530707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:42.007 [2024-05-15 12:32:50.775796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:42.007 [2024-05-15 12:32:50.776035] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:11:42.007 [2024-05-15 12:32:50.776066] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:42.265
00:11:42.265 real 0m0.915s
00:11:42.265 user 0m0.650s
00:11:42.265 sys 0m0.157s
00:11:42.265 12:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:42.265 12:32:51 -- common/autotest_common.sh@10 -- # set +x
00:11:42.265 ************************************
00:11:42.265 END TEST bdev_json_nonarray
00:11:42.265 ************************************
00:11:42.265 12:32:51 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]]
00:11:42.265 12:32:51 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]]
00:11:42.265 12:32:51 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:11:42.265 12:32:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:11:42.265 12:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:42.265 12:32:51 -- common/autotest_common.sh@10 -- # set +x
00:11:42.265 ************************************
00:11:42.265 START TEST bdev_gpt_uuid
00:11:42.265 ************************************
00:11:42.265 12:32:51 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid
00:11:42.265 12:32:51 -- bdev/blockdev.sh@612 -- # local bdev
00:11:42.265 12:32:51 -- bdev/blockdev.sh@614 -- # start_spdk_tgt
00:11:42.265 12:32:51 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=64361
00:11:42.265 12:32:51 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:42.265 12:32:51 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:11:42.265 12:32:51 -- bdev/blockdev.sh@47 -- # waitforlisten 64361
00:11:42.265 12:32:51 -- common/autotest_common.sh@819 -- # '[' -z 64361 ']'
00:11:42.265 12:32:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:42.265 12:32:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:11:42.265 12:32:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:42.265 12:32:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:11:42.265 12:32:51 -- common/autotest_common.sh@10 -- # set +x
00:11:42.524 [2024-05-15 12:32:51.364842] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:11:42.524 [2024-05-15 12:32:51.365803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64361 ]
00:11:42.784 [2024-05-15 12:32:51.542243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:42.784 [2024-05-15 12:32:51.783857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:42.784 [2024-05-15 12:32:51.784108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:44.162 12:32:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:11:44.162 12:32:52 -- common/autotest_common.sh@852 -- # return 0
00:11:44.162 12:32:52 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:44.162 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:11:44.162 12:32:52 -- common/autotest_common.sh@10 -- # set +x
00:11:44.419 Some configs were skipped because the RPC state that can call them passed over.
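[Editor's note] The assertions that follow fetch each GPT partition bdev by its unique partition GUID and check the JSON that comes back. A minimal sketch of the same check, using the rpc.py path and first-partition GUID from this run (a running spdk_tgt with this bdev configuration loaded is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    # look the partition bdev up by its GUID
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    # exactly one bdev should match, and both the alias and the GPT
    # unique_partition_guid should echo the GUID back
    [[ $(jq -r length <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]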
00:11:44.419 12:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:11:44.419 12:32:53 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine
00:11:44.419 12:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:11:44.419 12:32:53 -- common/autotest_common.sh@10 -- # set +x
00:11:44.419 12:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:11:44.419 12:32:53 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:11:44.419 12:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:11:44.419 12:32:53 -- common/autotest_common.sh@10 -- # set +x
00:11:44.419 12:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:11:44.419 12:32:53 -- bdev/blockdev.sh@619 -- # bdev='[
00:11:44.419 {
00:11:44.419 "name": "Nvme0n1p1",
00:11:44.419 "aliases": [
00:11:44.419 "6f89f330-603b-4116-ac73-2ca8eae53030"
00:11:44.419 ],
00:11:44.419 "product_name": "GPT Disk",
00:11:44.419 "block_size": 4096,
00:11:44.419 "num_blocks": 774144,
00:11:44.419 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:11:44.419 "md_size": 64,
00:11:44.419 "md_interleave": false,
00:11:44.419 "dif_type": 0,
00:11:44.419 "assigned_rate_limits": {
00:11:44.419 "rw_ios_per_sec": 0,
00:11:44.419 "rw_mbytes_per_sec": 0,
00:11:44.419 "r_mbytes_per_sec": 0,
00:11:44.419 "w_mbytes_per_sec": 0
00:11:44.419 },
00:11:44.419 "claimed": false,
00:11:44.419 "zoned": false,
00:11:44.419 "supported_io_types": {
00:11:44.419 "read": true,
00:11:44.419 "write": true,
00:11:44.419 "unmap": true,
00:11:44.419 "write_zeroes": true,
00:11:44.419 "flush": true,
00:11:44.419 "reset": true,
00:11:44.419 "compare": true,
00:11:44.419 "compare_and_write": false,
00:11:44.419 "abort": true,
00:11:44.419 "nvme_admin": false,
00:11:44.419 "nvme_io": false
00:11:44.419 },
00:11:44.419 "driver_specific": {
00:11:44.419 "gpt": {
00:11:44.419 "base_bdev": "Nvme0n1",
00:11:44.419 "offset_blocks": 256,
00:11:44.419 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:11:44.419 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:11:44.419 "partition_name": "SPDK_TEST_first"
00:11:44.419 }
00:11:44.419 }
00:11:44.419 }
00:11:44.419 ]'
00:11:44.419 12:32:53 -- bdev/blockdev.sh@620 -- # jq -r length
00:11:44.419 12:32:53 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]]
00:11:44.419 12:32:53 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]'
00:11:44.419 12:32:53 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:11:44.419 12:32:53 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:11:44.677 12:32:53 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:11:44.677 12:32:53 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:11:44.677 12:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:11:44.677 12:32:53 -- common/autotest_common.sh@10 -- # set +x
00:11:44.677 12:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:11:44.677 12:32:53 -- bdev/blockdev.sh@624 -- # bdev='[
00:11:44.677 {
00:11:44.677 "name": "Nvme0n1p2",
00:11:44.677 "aliases": [
00:11:44.677 "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:11:44.677 ],
00:11:44.677 "product_name": "GPT Disk",
00:11:44.677 "block_size": 4096,
00:11:44.677 "num_blocks": 774143,
00:11:44.677 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:11:44.677 "md_size": 64,
00:11:44.677 "md_interleave": false,
00:11:44.677 "dif_type": 0,
00:11:44.677 "assigned_rate_limits": {
00:11:44.677 "rw_ios_per_sec": 0,
00:11:44.677 "rw_mbytes_per_sec": 0,
00:11:44.677 "r_mbytes_per_sec": 0,
00:11:44.677 "w_mbytes_per_sec": 0
00:11:44.677 },
00:11:44.677 "claimed": false,
00:11:44.677 "zoned": false,
00:11:44.677 "supported_io_types": {
00:11:44.677 "read": true,
00:11:44.677 "write": true,
00:11:44.677 "unmap": true,
00:11:44.677 "write_zeroes": true,
00:11:44.677 "flush": true,
00:11:44.677 "reset": true,
00:11:44.677 "compare": true,
00:11:44.677 "compare_and_write": false,
00:11:44.677 "abort": true,
00:11:44.677 "nvme_admin": false,
00:11:44.677 "nvme_io": false
00:11:44.677 },
00:11:44.677 "driver_specific": {
00:11:44.677 "gpt": {
00:11:44.677 "base_bdev": "Nvme0n1",
00:11:44.677 "offset_blocks": 774400,
00:11:44.677 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:11:44.677 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:11:44.677 "partition_name": "SPDK_TEST_second"
00:11:44.677 }
00:11:44.677 }
00:11:44.677 }
00:11:44.677 ]'
00:11:44.677 12:32:53 -- bdev/blockdev.sh@625 -- # jq -r length
00:11:44.677 12:32:53 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]]
00:11:44.677 12:32:53 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]'
00:11:44.677 12:32:53 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:11:44.677 12:32:53 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:11:44.677 12:32:53 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:11:44.677 12:32:53 -- bdev/blockdev.sh@629 -- # killprocess 64361
00:11:44.677 12:32:53 -- common/autotest_common.sh@926 -- # '[' -z 64361 ']'
00:11:44.677 12:32:53 -- common/autotest_common.sh@930 -- # kill -0 64361
00:11:44.677 12:32:53 -- common/autotest_common.sh@931 -- # uname
00:11:44.677 12:32:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:11:44.677 12:32:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64361
00:11:44.677 12:32:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:11:44.677 12:32:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
killing process with pid 64361
00:11:44.677 12:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64361'
00:11:44.677 12:32:53 -- common/autotest_common.sh@945 -- # kill 64361
00:11:44.677 12:32:53 -- common/autotest_common.sh@950 -- # wait 64361
00:11:47.210
00:11:47.210 real 0m4.592s
00:11:47.210 user 0m4.883s
00:11:47.210 sys 0m0.615s
00:11:47.210 12:32:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:47.210 12:32:55 -- common/autotest_common.sh@10 -- # set +x
00:11:47.210 ************************************
00:11:47.210 END TEST bdev_gpt_uuid
00:11:47.211 ************************************
00:11:47.211 12:32:55 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]]
00:11:47.211 12:32:55 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:11:47.211 12:32:55 -- bdev/blockdev.sh@809 -- # cleanup
00:11:47.211 12:32:55 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:11:47.211 12:32:55 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:47.211 12:32:55 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]]
00:11:47.211 12:32:55 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:11:47.211 12:32:55 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:11:47.211 12:32:55 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:47.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:47.469 Waiting for block devices as requested 00:11:47.469 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.826 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.826 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.826 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.095 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:53.095 12:33:01 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:11:53.095 12:33:01 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:11:53.095 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:53.095 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:11:53.095 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:53.095 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:11:53.095 12:33:02 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:11:53.095 00:11:53.095 real 1m8.180s 00:11:53.095 user 1m27.833s 00:11:53.095 sys 0m10.518s 00:11:53.095 12:33:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.095 ************************************ 00:11:53.095 END TEST blockdev_nvme_gpt 00:11:53.095 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.095 ************************************ 00:11:53.095 12:33:02 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:53.095 12:33:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:53.095 12:33:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.095 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.095 ************************************ 00:11:53.095 START TEST nvme 00:11:53.095 ************************************ 00:11:53.095 12:33:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:53.352 * Looking for test storage... 
00:11:53.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:53.352 12:33:02 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:54.322 lsblk: /dev/nvme0c0n1: not a block device 00:11:54.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:54.322 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.580 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.580 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.580 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.580 12:33:03 -- nvme/nvme.sh@79 -- # uname 00:11:54.580 12:33:03 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:54.580 12:33:03 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:54.580 12:33:03 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:54.580 12:33:03 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:54.580 12:33:03 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:11:54.580 12:33:03 -- common/autotest_common.sh@1045 -- # echo 0 00:11:54.580 12:33:03 -- common/autotest_common.sh@1047 -- # stubpid=65059 00:11:54.580 12:33:03 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:54.580 Waiting for stub to ready for secondary processes... 00:11:54.580 12:33:03 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:11:54.580 12:33:03 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:54.580 12:33:03 -- common/autotest_common.sh@1051 -- # [[ -e /proc/65059 ]] 00:11:54.580 12:33:03 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:54.580 [2024-05-15 12:33:03.556793] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:54.580 [2024-05-15 12:33:03.556978] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.524 12:33:04 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:55.524 12:33:04 -- common/autotest_common.sh@1051 -- # [[ -e /proc/65059 ]] 00:11:55.524 12:33:04 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:56.096 [2024-05-15 12:33:04.868668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:56.353 [2024-05-15 12:33:05.118539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.353 [2024-05-15 12:33:05.118639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.353 [2024-05-15 12:33:05.118662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.353 [2024-05-15 12:33:05.144234] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:56.353 [2024-05-15 12:33:05.156406] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:56.353 [2024-05-15 12:33:05.156674] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:56.353 [2024-05-15 12:33:05.168965] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:56.353 [2024-05-15 12:33:05.169201] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:56.353 [2024-05-15 12:33:05.169352] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:56.353 [2024-05-15 12:33:05.179209] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:56.353 [2024-05-15 12:33:05.179419] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:56.353 [2024-05-15 12:33:05.179584] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:56.353 [2024-05-15 12:33:05.189694] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:56.353 [2024-05-15 12:33:05.189961] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:56.353 [2024-05-15 12:33:05.190157] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:56.353 [2024-05-15 12:33:05.190307] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:56.353 [2024-05-15 12:33:05.190520] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:56.611 12:33:05 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:56.611 done. 00:11:56.611 12:33:05 -- common/autotest_common.sh@1054 -- # echo done. 
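The exchange above is the autotest harness's primary/secondary handshake: the stub is started as a DPDK primary process that reserves the hugepage memory (-s 4096, cores per -m 0xE), and the harness polls until the stub publishes /var/run/spdk_stub0, checking on each pass that the stub pid is still alive before sleeping. A sketch of that wait loop, reconstructed from the trace (pid and paths as logged):

    # Poll for the stub's ready file; bail out if the stub dies first.
    stubpid=65059
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || { echo "stub exited prematurely" >&2; exit 1; }
        sleep 1s
    done
    echo done.

Once the file exists, the per-test tools that follow attach as secondary processes (shared memory id 0) to the stub's memory instead of re-initializing DPDK for every test.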
00:11:56.611 12:33:05 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:56.611 12:33:05 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:56.611 12:33:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.611 12:33:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.611 ************************************ 00:11:56.611 START TEST nvme_reset 00:11:56.611 ************************************ 00:11:56.611 12:33:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:56.869 Initializing NVMe Controllers 00:11:56.869 Skipping QEMU NVMe SSD at 0000:00:06.0 00:11:56.869 Skipping QEMU NVMe SSD at 0000:00:07.0 00:11:56.869 Skipping QEMU NVMe SSD at 0000:00:09.0 00:11:56.869 Skipping QEMU NVMe SSD at 0000:00:08.0 00:11:56.869 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:56.869 00:11:56.869 real 0m0.289s 00:11:56.869 user 0m0.097s 00:11:56.869 sys 0m0.148s 00:11:56.869 12:33:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.869 12:33:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.869 ************************************ 00:11:56.869 END TEST nvme_reset 00:11:56.869 ************************************ 00:11:56.869 12:33:05 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:56.869 12:33:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:56.869 12:33:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.869 12:33:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.869 ************************************ 00:11:56.869 START TEST nvme_identify 00:11:56.869 ************************************ 00:11:56.869 12:33:05 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:56.869 12:33:05 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:56.869 12:33:05 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:56.869 12:33:05 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:56.869 12:33:05 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:56.869 12:33:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:56.869 12:33:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:56.869 12:33:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:56.869 12:33:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:56.869 12:33:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:57.127 12:33:05 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:57.127 12:33:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:57.127 12:33:05 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:57.401 [2024-05-15 12:33:06.185323] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 65101 terminated unexpected 00:11:57.401 ===================================================== 00:11:57.401 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:57.401 ===================================================== 00:11:57.401 Controller Capabilities/Features 00:11:57.401 ================================ 00:11:57.401 Vendor ID: 1b36 00:11:57.401 Subsystem Vendor ID: 1af4 00:11:57.401 Serial Number: 12340 00:11:57.401 Model Number: QEMU NVMe Ctrl 00:11:57.401 Firmware Version: 8.0.0 00:11:57.401 Recommended Arb 
Burst: 6 00:11:57.401 IEEE OUI Identifier: 00 54 52 00:11:57.402 Multi-path I/O 00:11:57.402 May have multiple subsystem ports: No 00:11:57.402 May have multiple controllers: No 00:11:57.402 Associated with SR-IOV VF: No 00:11:57.402 Max Data Transfer Size: 524288 00:11:57.402 Max Number of Namespaces: 256 00:11:57.402 Max Number of I/O Queues: 64 00:11:57.402 NVMe Specification Version (VS): 1.4 00:11:57.402 NVMe Specification Version (Identify): 1.4 00:11:57.402 Maximum Queue Entries: 2048 00:11:57.402 Contiguous Queues Required: Yes 00:11:57.402 Arbitration Mechanisms Supported 00:11:57.402 Weighted Round Robin: Not Supported 00:11:57.402 Vendor Specific: Not Supported 00:11:57.402 Reset Timeout: 7500 ms 00:11:57.402 Doorbell Stride: 4 bytes 00:11:57.402 NVM Subsystem Reset: Not Supported 00:11:57.402 Command Sets Supported 00:11:57.402 NVM Command Set: Supported 00:11:57.402 Boot Partition: Not Supported 00:11:57.402 Memory Page Size Minimum: 4096 bytes 00:11:57.402 Memory Page Size Maximum: 65536 bytes 00:11:57.402 Persistent Memory Region: Not Supported 00:11:57.402 Optional Asynchronous Events Supported 00:11:57.402 Namespace Attribute Notices: Supported 00:11:57.402 Firmware Activation Notices: Not Supported 00:11:57.402 ANA Change Notices: Not Supported 00:11:57.402 PLE Aggregate Log Change Notices: Not Supported 00:11:57.402 LBA Status Info Alert Notices: Not Supported 00:11:57.402 EGE Aggregate Log Change Notices: Not Supported 00:11:57.402 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.402 Zone Descriptor Change Notices: Not Supported 00:11:57.402 Discovery Log Change Notices: Not Supported 00:11:57.402 Controller Attributes 00:11:57.402 128-bit Host Identifier: Not Supported 00:11:57.402 Non-Operational Permissive Mode: Not Supported 00:11:57.402 NVM Sets: Not Supported 00:11:57.402 Read Recovery Levels: Not Supported 00:11:57.402 Endurance Groups: Not Supported 00:11:57.402 Predictable Latency Mode: Not Supported 00:11:57.402 Traffic Based Keep ALive: Not Supported 00:11:57.402 Namespace Granularity: Not Supported 00:11:57.402 SQ Associations: Not Supported 00:11:57.402 UUID List: Not Supported 00:11:57.402 Multi-Domain Subsystem: Not Supported 00:11:57.402 Fixed Capacity Management: Not Supported 00:11:57.402 Variable Capacity Management: Not Supported 00:11:57.402 Delete Endurance Group: Not Supported 00:11:57.402 Delete NVM Set: Not Supported 00:11:57.402 Extended LBA Formats Supported: Supported 00:11:57.402 Flexible Data Placement Supported: Not Supported 00:11:57.402 00:11:57.402 Controller Memory Buffer Support 00:11:57.402 ================================ 00:11:57.402 Supported: No 00:11:57.402 00:11:57.402 Persistent Memory Region Support 00:11:57.402 ================================ 00:11:57.402 Supported: No 00:11:57.402 00:11:57.402 Admin Command Set Attributes 00:11:57.402 ============================ 00:11:57.402 Security Send/Receive: Not Supported 00:11:57.402 Format NVM: Supported 00:11:57.402 Firmware Activate/Download: Not Supported 00:11:57.402 Namespace Management: Supported 00:11:57.402 Device Self-Test: Not Supported 00:11:57.402 Directives: Supported 00:11:57.402 NVMe-MI: Not Supported 00:11:57.402 Virtualization Management: Not Supported 00:11:57.402 Doorbell Buffer Config: Supported 00:11:57.402 Get LBA Status Capability: Not Supported 00:11:57.402 Command & Feature Lockdown Capability: Not Supported 00:11:57.402 Abort Command Limit: 4 00:11:57.402 Async Event Request Limit: 4 00:11:57.402 Number of Firmware Slots: N/A 00:11:57.402 
Firmware Slot 1 Read-Only: N/A 00:11:57.402 Firmware Activation Without Reset: N/A 00:11:57.402 Multiple Update Detection Support: N/A 00:11:57.402 Firmware Update Granularity: No Information Provided 00:11:57.402 Per-Namespace SMART Log: Yes 00:11:57.402 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.402 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:57.402 Command Effects Log Page: Supported 00:11:57.402 Get Log Page Extended Data: Supported 00:11:57.402 Telemetry Log Pages: Not Supported 00:11:57.402 Persistent Event Log Pages: Not Supported 00:11:57.402 Supported Log Pages Log Page: May Support 00:11:57.402 Commands Supported & Effects Log Page: Not Supported 00:11:57.402 Feature Identifiers & Effects Log Page:May Support 00:11:57.402 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.402 Data Area 4 for Telemetry Log: Not Supported 00:11:57.402 Error Log Page Entries Supported: 1 00:11:57.402 Keep Alive: Not Supported 00:11:57.402 00:11:57.402 NVM Command Set Attributes 00:11:57.402 ========================== 00:11:57.402 Submission Queue Entry Size 00:11:57.402 Max: 64 00:11:57.402 Min: 64 00:11:57.402 Completion Queue Entry Size 00:11:57.402 Max: 16 00:11:57.402 Min: 16 00:11:57.402 Number of Namespaces: 256 00:11:57.402 Compare Command: Supported 00:11:57.402 Write Uncorrectable Command: Not Supported 00:11:57.402 Dataset Management Command: Supported 00:11:57.402 Write Zeroes Command: Supported 00:11:57.402 Set Features Save Field: Supported 00:11:57.402 Reservations: Not Supported 00:11:57.402 Timestamp: Supported 00:11:57.402 Copy: Supported 00:11:57.402 Volatile Write Cache: Present 00:11:57.402 Atomic Write Unit (Normal): 1 00:11:57.402 Atomic Write Unit (PFail): 1 00:11:57.402 Atomic Compare & Write Unit: 1 00:11:57.402 Fused Compare & Write: Not Supported 00:11:57.402 Scatter-Gather List 00:11:57.402 SGL Command Set: Supported 00:11:57.402 SGL Keyed: Not Supported 00:11:57.402 SGL Bit Bucket Descriptor: Not Supported 00:11:57.402 SGL Metadata Pointer: Not Supported 00:11:57.402 Oversized SGL: Not Supported 00:11:57.402 SGL Metadata Address: Not Supported 00:11:57.402 SGL Offset: Not Supported 00:11:57.402 Transport SGL Data Block: Not Supported 00:11:57.402 Replay Protected Memory Block: Not Supported 00:11:57.402 00:11:57.402 Firmware Slot Information 00:11:57.402 ========================= 00:11:57.402 Active slot: 1 00:11:57.402 Slot 1 Firmware Revision: 1.0 00:11:57.402 00:11:57.402 00:11:57.402 Commands Supported and Effects 00:11:57.402 ============================== 00:11:57.402 Admin Commands 00:11:57.402 -------------- 00:11:57.402 Delete I/O Submission Queue (00h): Supported 00:11:57.402 Create I/O Submission Queue (01h): Supported 00:11:57.402 Get Log Page (02h): Supported 00:11:57.402 Delete I/O Completion Queue (04h): Supported 00:11:57.402 Create I/O Completion Queue (05h): Supported 00:11:57.402 Identify (06h): Supported 00:11:57.402 Abort (08h): Supported 00:11:57.402 Set Features (09h): Supported 00:11:57.402 Get Features (0Ah): Supported 00:11:57.402 Asynchronous Event Request (0Ch): Supported 00:11:57.402 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.402 Directive Send (19h): Supported 00:11:57.402 Directive Receive (1Ah): Supported 00:11:57.402 Virtualization Management (1Ch): Supported 00:11:57.402 Doorbell Buffer Config (7Ch): Supported 00:11:57.402 Format NVM (80h): Supported LBA-Change 00:11:57.402 I/O Commands 00:11:57.402 ------------ 00:11:57.402 Flush (00h): Supported LBA-Change 00:11:57.402 Write (01h): 
Supported LBA-Change 00:11:57.402 Read (02h): Supported 00:11:57.402 Compare (05h): Supported 00:11:57.402 Write Zeroes (08h): Supported LBA-Change 00:11:57.402 Dataset Management (09h): Supported LBA-Change 00:11:57.402 Unknown (0Ch): Supported 00:11:57.402 Unknown (12h): Supported 00:11:57.402 Copy (19h): Supported LBA-Change 00:11:57.402 Unknown (1Dh): Supported LBA-Change 00:11:57.402 00:11:57.402 Error Log 00:11:57.402 ========= 00:11:57.402 00:11:57.402 Arbitration 00:11:57.402 =========== 00:11:57.402 Arbitration Burst: no limit 00:11:57.402 00:11:57.402 Power Management 00:11:57.402 ================ 00:11:57.402 Number of Power States: 1 00:11:57.402 Current Power State: Power State #0 00:11:57.402 Power State #0: 00:11:57.402 Max Power: 25.00 W 00:11:57.402 Non-Operational State: Operational 00:11:57.402 Entry Latency: 16 microseconds 00:11:57.402 Exit Latency: 4 microseconds 00:11:57.402 Relative Read Throughput: 0 00:11:57.402 Relative Read Latency: 0 00:11:57.402 Relative Write Throughput: 0 00:11:57.402 Relative Write Latency: 0 00:11:57.402 Idle Power[2024-05-15 12:33:06.187161] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 65101 terminated unexpected 00:11:57.402 : Not Reported 00:11:57.402 Active Power: Not Reported 00:11:57.402 Non-Operational Permissive Mode: Not Supported 00:11:57.402 00:11:57.402 Health Information 00:11:57.402 ================== 00:11:57.402 Critical Warnings: 00:11:57.402 Available Spare Space: OK 00:11:57.402 Temperature: OK 00:11:57.402 Device Reliability: OK 00:11:57.402 Read Only: No 00:11:57.402 Volatile Memory Backup: OK 00:11:57.402 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.402 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.402 Available Spare: 0% 00:11:57.402 Available Spare Threshold: 0% 00:11:57.402 Life Percentage Used: 0% 00:11:57.402 Data Units Read: 1753 00:11:57.402 Data Units Written: 804 00:11:57.402 Host Read Commands: 84438 00:11:57.403 Host Write Commands: 41811 00:11:57.403 Controller Busy Time: 0 minutes 00:11:57.403 Power Cycles: 0 00:11:57.403 Power On Hours: 0 hours 00:11:57.403 Unsafe Shutdowns: 0 00:11:57.403 Unrecoverable Media Errors: 0 00:11:57.403 Lifetime Error Log Entries: 0 00:11:57.403 Warning Temperature Time: 0 minutes 00:11:57.403 Critical Temperature Time: 0 minutes 00:11:57.403 00:11:57.403 Number of Queues 00:11:57.403 ================ 00:11:57.403 Number of I/O Submission Queues: 64 00:11:57.403 Number of I/O Completion Queues: 64 00:11:57.403 00:11:57.403 ZNS Specific Controller Data 00:11:57.403 ============================ 00:11:57.403 Zone Append Size Limit: 0 00:11:57.403 00:11:57.403 00:11:57.403 Active Namespaces 00:11:57.403 ================= 00:11:57.403 Namespace ID:1 00:11:57.403 Error Recovery Timeout: Unlimited 00:11:57.403 Command Set Identifier: NVM (00h) 00:11:57.403 Deallocate: Supported 00:11:57.403 Deallocated/Unwritten Error: Supported 00:11:57.403 Deallocated Read Value: All 0x00 00:11:57.403 Deallocate in Write Zeroes: Not Supported 00:11:57.403 Deallocated Guard Field: 0xFFFF 00:11:57.403 Flush: Supported 00:11:57.403 Reservation: Not Supported 00:11:57.403 Metadata Transferred as: Separate Metadata Buffer 00:11:57.403 Namespace Sharing Capabilities: Private 00:11:57.403 Size (in LBAs): 1548666 (5GiB) 00:11:57.403 Capacity (in LBAs): 1548666 (5GiB) 00:11:57.403 Utilization (in LBAs): 1548666 (5GiB) 00:11:57.403 Thin Provisioning: Not Supported 00:11:57.403 Per-NS Atomic Units: No 00:11:57.403 Maximum Single Source Range Length: 
128 00:11:57.403 Maximum Copy Length: 128 00:11:57.403 Maximum Source Range Count: 128 00:11:57.403 NGUID/EUI64 Never Reused: No 00:11:57.403 Namespace Write Protected: No 00:11:57.403 Number of LBA Formats: 8 00:11:57.403 Current LBA Format: LBA Format #07 00:11:57.403 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.403 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.403 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.403 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.403 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.403 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.403 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.403 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.403 00:11:57.403 ===================================================== 00:11:57.403 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:57.403 ===================================================== 00:11:57.403 Controller Capabilities/Features 00:11:57.403 ================================ 00:11:57.403 Vendor ID: 1b36 00:11:57.403 Subsystem Vendor ID: 1af4 00:11:57.403 Serial Number: 12341 00:11:57.403 Model Number: QEMU NVMe Ctrl 00:11:57.403 Firmware Version: 8.0.0 00:11:57.403 Recommended Arb Burst: 6 00:11:57.403 IEEE OUI Identifier: 00 54 52 00:11:57.403 Multi-path I/O 00:11:57.403 May have multiple subsystem ports: No 00:11:57.403 May have multiple controllers: No 00:11:57.403 Associated with SR-IOV VF: No 00:11:57.403 Max Data Transfer Size: 524288 00:11:57.403 Max Number of Namespaces: 256 00:11:57.403 Max Number of I/O Queues: 64 00:11:57.403 NVMe Specification Version (VS): 1.4 00:11:57.403 NVMe Specification Version (Identify): 1.4 00:11:57.403 Maximum Queue Entries: 2048 00:11:57.403 Contiguous Queues Required: Yes 00:11:57.403 Arbitration Mechanisms Supported 00:11:57.403 Weighted Round Robin: Not Supported 00:11:57.403 Vendor Specific: Not Supported 00:11:57.403 Reset Timeout: 7500 ms 00:11:57.403 Doorbell Stride: 4 bytes 00:11:57.403 NVM Subsystem Reset: Not Supported 00:11:57.403 Command Sets Supported 00:11:57.403 NVM Command Set: Supported 00:11:57.403 Boot Partition: Not Supported 00:11:57.403 Memory Page Size Minimum: 4096 bytes 00:11:57.403 Memory Page Size Maximum: 65536 bytes 00:11:57.403 Persistent Memory Region: Not Supported 00:11:57.403 Optional Asynchronous Events Supported 00:11:57.403 Namespace Attribute Notices: Supported 00:11:57.403 Firmware Activation Notices: Not Supported 00:11:57.403 ANA Change Notices: Not Supported 00:11:57.403 PLE Aggregate Log Change Notices: Not Supported 00:11:57.403 LBA Status Info Alert Notices: Not Supported 00:11:57.403 EGE Aggregate Log Change Notices: Not Supported 00:11:57.403 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.403 Zone Descriptor Change Notices: Not Supported 00:11:57.403 Discovery Log Change Notices: Not Supported 00:11:57.403 Controller Attributes 00:11:57.403 128-bit Host Identifier: Not Supported 00:11:57.403 Non-Operational Permissive Mode: Not Supported 00:11:57.403 NVM Sets: Not Supported 00:11:57.403 Read Recovery Levels: Not Supported 00:11:57.403 Endurance Groups: Not Supported 00:11:57.403 Predictable Latency Mode: Not Supported 00:11:57.403 Traffic Based Keep ALive: Not Supported 00:11:57.403 Namespace Granularity: Not Supported 00:11:57.403 SQ Associations: Not Supported 00:11:57.403 UUID List: Not Supported 00:11:57.403 Multi-Domain Subsystem: Not Supported 00:11:57.403 Fixed Capacity Management: Not Supported 00:11:57.403 Variable Capacity 
Management: Not Supported 00:11:57.403 Delete Endurance Group: Not Supported 00:11:57.403 Delete NVM Set: Not Supported 00:11:57.403 Extended LBA Formats Supported: Supported 00:11:57.403 Flexible Data Placement Supported: Not Supported 00:11:57.403 00:11:57.403 Controller Memory Buffer Support 00:11:57.403 ================================ 00:11:57.403 Supported: No 00:11:57.403 00:11:57.403 Persistent Memory Region Support 00:11:57.403 ================================ 00:11:57.403 Supported: No 00:11:57.403 00:11:57.403 Admin Command Set Attributes 00:11:57.403 ============================ 00:11:57.403 Security Send/Receive: Not Supported 00:11:57.403 Format NVM: Supported 00:11:57.403 Firmware Activate/Download: Not Supported 00:11:57.403 Namespace Management: Supported 00:11:57.403 Device Self-Test: Not Supported 00:11:57.403 Directives: Supported 00:11:57.403 NVMe-MI: Not Supported 00:11:57.403 Virtualization Management: Not Supported 00:11:57.403 Doorbell Buffer Config: Supported 00:11:57.403 Get LBA Status Capability: Not Supported 00:11:57.403 Command & Feature Lockdown Capability: Not Supported 00:11:57.403 Abort Command Limit: 4 00:11:57.403 Async Event Request Limit: 4 00:11:57.403 Number of Firmware Slots: N/A 00:11:57.403 Firmware Slot 1 Read-Only: N/A 00:11:57.403 Firmware Activation Without Reset: N/A 00:11:57.403 Multiple Update Detection Support: N/A 00:11:57.403 Firmware Update Granularity: No Information Provided 00:11:57.403 Per-Namespace SMART Log: Yes 00:11:57.403 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.403 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:57.403 Command Effects Log Page: Supported 00:11:57.403 Get Log Page Extended Data: Supported 00:11:57.403 Telemetry Log Pages: Not Supported 00:11:57.403 Persistent Event Log Pages: Not Supported 00:11:57.403 Supported Log Pages Log Page: May Support 00:11:57.403 Commands Supported & Effects Log Page: Not Supported 00:11:57.403 Feature Identifiers & Effects Log Page:May Support 00:11:57.403 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.403 Data Area 4 for Telemetry Log: Not Supported 00:11:57.403 Error Log Page Entries Supported: 1 00:11:57.403 Keep Alive: Not Supported 00:11:57.403 00:11:57.403 NVM Command Set Attributes 00:11:57.403 ========================== 00:11:57.403 Submission Queue Entry Size 00:11:57.403 Max: 64 00:11:57.403 Min: 64 00:11:57.403 Completion Queue Entry Size 00:11:57.403 Max: 16 00:11:57.403 Min: 16 00:11:57.403 Number of Namespaces: 256 00:11:57.403 Compare Command: Supported 00:11:57.403 Write Uncorrectable Command: Not Supported 00:11:57.403 Dataset Management Command: Supported 00:11:57.403 Write Zeroes Command: Supported 00:11:57.403 Set Features Save Field: Supported 00:11:57.403 Reservations: Not Supported 00:11:57.403 Timestamp: Supported 00:11:57.403 Copy: Supported 00:11:57.403 Volatile Write Cache: Present 00:11:57.403 Atomic Write Unit (Normal): 1 00:11:57.403 Atomic Write Unit (PFail): 1 00:11:57.403 Atomic Compare & Write Unit: 1 00:11:57.403 Fused Compare & Write: Not Supported 00:11:57.403 Scatter-Gather List 00:11:57.403 SGL Command Set: Supported 00:11:57.403 SGL Keyed: Not Supported 00:11:57.403 SGL Bit Bucket Descriptor: Not Supported 00:11:57.403 SGL Metadata Pointer: Not Supported 00:11:57.403 Oversized SGL: Not Supported 00:11:57.403 SGL Metadata Address: Not Supported 00:11:57.403 SGL Offset: Not Supported 00:11:57.403 Transport SGL Data Block: Not Supported 00:11:57.403 Replay Protected Memory Block: Not Supported 00:11:57.403 
00:11:57.403 Firmware Slot Information 00:11:57.403 ========================= 00:11:57.403 Active slot: 1 00:11:57.403 Slot 1 Firmware Revision: 1.0 00:11:57.403 00:11:57.403 00:11:57.403 Commands Supported and Effects 00:11:57.403 ============================== 00:11:57.403 Admin Commands 00:11:57.403 -------------- 00:11:57.403 Delete I/O Submission Queue (00h): Supported 00:11:57.403 Create I/O Submission Queue (01h): Supported 00:11:57.403 Get Log Page (02h): Supported 00:11:57.403 Delete I/O Completion Queue (04h): Supported 00:11:57.404 Create I/O Completion Queue (05h): Supported 00:11:57.404 Identify (06h): Supported 00:11:57.404 Abort (08h): Supported 00:11:57.404 Set Features (09h): Supported 00:11:57.404 Get Features (0Ah): Supported 00:11:57.404 Asynchronous Event Request (0Ch): Supported 00:11:57.404 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.404 Directive Send (19h): Supported 00:11:57.404 Directive Receive (1Ah): Supported 00:11:57.404 Virtualization Management (1Ch): Supported 00:11:57.404 Doorbell Buffer Config (7Ch): Supported 00:11:57.404 Format NVM (80h): Supported LBA-Change 00:11:57.404 I/O Commands 00:11:57.404 ------------ 00:11:57.404 Flush (00h): Supported LBA-Change 00:11:57.404 Write (01h): Supported LBA-Change 00:11:57.404 Read (02h): Supported 00:11:57.404 Compare (05h): Supported 00:11:57.404 Write Zeroes (08h): Supported LBA-Change 00:11:57.404 Dataset Management (09h): Supported LBA-Change 00:11:57.404 Unknown (0Ch): Supported 00:11:57.404 Unknown (12h): Supported 00:11:57.404 Copy (19h): Supported LBA-Change 00:11:57.404 Unknown (1Dh): Supported LBA-Change 00:11:57.404 00:11:57.404 Error Log 00:11:57.404 ========= 00:11:57.404 00:11:57.404 Arbitration 00:11:57.404 =========== 00:11:57.404 Arbitration Burst: no limit 00:11:57.404 00:11:57.404 Power Management 00:11:57.404 ================ 00:11:57.404 Number of Power States: 1 00:11:57.404 Current Power State: Power State #0 00:11:57.404 Power State #0: 00:11:57.404 Max Power: 25.00 W 00:11:57.404 Non-Operational State: Operational 00:11:57.404 Entry Latency: 16 microseconds 00:11:57.404 Exit Latency: 4 microseconds 00:11:57.404 Relative Read Throughput: 0 00:11:57.404 Relative Read Latency: 0 00:11:57.404 Relative Write Throughput: 0 00:11:57.404 Relative Write Latency: 0 00:11:57.404 Idle Power: Not Reported 00:11:57.404 Active Power: Not Reported 00:11:57.404 Non-Operational Permissive Mode: Not Supported 00:11:57.404 00:11:57.404 Health Information 00:11:57.404 ================== 00:11:57.404 Critical Warnings: 00:11:57.404 Available Spare Space: OK 00:11:57.404 Temperature: OK 00:11:57.404 Device Reliability: OK 00:11:57.404 Read Only: No 00:11:57.404 Volatile Memory Backup: OK 00:11:57.404 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.404 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.404 Available Spare: 0% 00:11:57.404 Available Spare Threshold: 0% 00:11:57.404 Life Percentage Used: 0% 00:11:57.404 Data Units Read: 1206 00:11:57.404 Data Units Written: 558 00:11:57.404 Host Read Commands: 57943 00:11:57.404 Host Write Commands: 28458 00:11:57.404 Controller Busy Time: 0 minutes 00:11:57.404 Power Cycles: 0 00:11:57.404 Power On Hours: 0 hours 00:11:57.404 Unsafe Shutdowns: 0 00:11:57.404 Unrecoverable Media Errors: 0 00:11:57.404 Lifetime Error Log Entries: 0 00:11:57.404 Warning Temperature Time: 0 minutes 00:11:57.404 Critical Temperature Time: 0 minutes 00:11:57.404 00:11:57.404 Number of Queues 00:11:57.404 ================ 00:11:57.404 Number of I/O 
Submission Queues: 64 00:11:57.404 Number of I/O Completion Queues: 64 00:11:57.404 00:11:57.404 ZNS Specific Controller Data 00:11:57.404 ============================ 00:11:57.404 Zone Append Size Limit: 0 00:11:57.404 00:11:57.404 00:11:57.404 Active Namespaces 00:11:57.404 ================= 00:11:57.404 Namespace ID:1 00:11:57.404 Error Recovery Timeout: Unlimited 00:11:57.404 Command Set Identifier: [2024-05-15 12:33:06.188148] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 65101 terminated unexpected 00:11:57.404 NVM (00h) 00:11:57.404 Deallocate: Supported 00:11:57.404 Deallocated/Unwritten Error: Supported 00:11:57.404 Deallocated Read Value: All 0x00 00:11:57.404 Deallocate in Write Zeroes: Not Supported 00:11:57.404 Deallocated Guard Field: 0xFFFF 00:11:57.404 Flush: Supported 00:11:57.404 Reservation: Not Supported 00:11:57.404 Namespace Sharing Capabilities: Private 00:11:57.404 Size (in LBAs): 1310720 (5GiB) 00:11:57.404 Capacity (in LBAs): 1310720 (5GiB) 00:11:57.404 Utilization (in LBAs): 1310720 (5GiB) 00:11:57.404 Thin Provisioning: Not Supported 00:11:57.404 Per-NS Atomic Units: No 00:11:57.404 Maximum Single Source Range Length: 128 00:11:57.404 Maximum Copy Length: 128 00:11:57.404 Maximum Source Range Count: 128 00:11:57.404 NGUID/EUI64 Never Reused: No 00:11:57.404 Namespace Write Protected: No 00:11:57.404 Number of LBA Formats: 8 00:11:57.404 Current LBA Format: LBA Format #04 00:11:57.404 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.404 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.404 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.404 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.404 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.404 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.404 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.404 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.404 00:11:57.404 ===================================================== 00:11:57.404 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:57.404 ===================================================== 00:11:57.404 Controller Capabilities/Features 00:11:57.404 ================================ 00:11:57.404 Vendor ID: 1b36 00:11:57.404 Subsystem Vendor ID: 1af4 00:11:57.404 Serial Number: 12343 00:11:57.404 Model Number: QEMU NVMe Ctrl 00:11:57.404 Firmware Version: 8.0.0 00:11:57.404 Recommended Arb Burst: 6 00:11:57.404 IEEE OUI Identifier: 00 54 52 00:11:57.404 Multi-path I/O 00:11:57.404 May have multiple subsystem ports: No 00:11:57.404 May have multiple controllers: Yes 00:11:57.404 Associated with SR-IOV VF: No 00:11:57.404 Max Data Transfer Size: 524288 00:11:57.404 Max Number of Namespaces: 256 00:11:57.404 Max Number of I/O Queues: 64 00:11:57.404 NVMe Specification Version (VS): 1.4 00:11:57.404 NVMe Specification Version (Identify): 1.4 00:11:57.404 Maximum Queue Entries: 2048 00:11:57.404 Contiguous Queues Required: Yes 00:11:57.404 Arbitration Mechanisms Supported 00:11:57.404 Weighted Round Robin: Not Supported 00:11:57.404 Vendor Specific: Not Supported 00:11:57.404 Reset Timeout: 7500 ms 00:11:57.404 Doorbell Stride: 4 bytes 00:11:57.404 NVM Subsystem Reset: Not Supported 00:11:57.404 Command Sets Supported 00:11:57.404 NVM Command Set: Supported 00:11:57.404 Boot Partition: Not Supported 00:11:57.404 Memory Page Size Minimum: 4096 bytes 00:11:57.404 Memory Page Size Maximum: 65536 bytes 00:11:57.404 Persistent Memory Region: Not Supported 00:11:57.404 
Optional Asynchronous Events Supported 00:11:57.404 Namespace Attribute Notices: Supported 00:11:57.404 Firmware Activation Notices: Not Supported 00:11:57.404 ANA Change Notices: Not Supported 00:11:57.404 PLE Aggregate Log Change Notices: Not Supported 00:11:57.404 LBA Status Info Alert Notices: Not Supported 00:11:57.404 EGE Aggregate Log Change Notices: Not Supported 00:11:57.404 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.404 Zone Descriptor Change Notices: Not Supported 00:11:57.404 Discovery Log Change Notices: Not Supported 00:11:57.404 Controller Attributes 00:11:57.404 128-bit Host Identifier: Not Supported 00:11:57.404 Non-Operational Permissive Mode: Not Supported 00:11:57.404 NVM Sets: Not Supported 00:11:57.404 Read Recovery Levels: Not Supported 00:11:57.404 Endurance Groups: Supported 00:11:57.404 Predictable Latency Mode: Not Supported 00:11:57.404 Traffic Based Keep ALive: Not Supported 00:11:57.404 Namespace Granularity: Not Supported 00:11:57.404 SQ Associations: Not Supported 00:11:57.404 UUID List: Not Supported 00:11:57.404 Multi-Domain Subsystem: Not Supported 00:11:57.404 Fixed Capacity Management: Not Supported 00:11:57.404 Variable Capacity Management: Not Supported 00:11:57.404 Delete Endurance Group: Not Supported 00:11:57.404 Delete NVM Set: Not Supported 00:11:57.404 Extended LBA Formats Supported: Supported 00:11:57.404 Flexible Data Placement Supported: Supported 00:11:57.404 00:11:57.404 Controller Memory Buffer Support 00:11:57.404 ================================ 00:11:57.404 Supported: No 00:11:57.404 00:11:57.404 Persistent Memory Region Support 00:11:57.404 ================================ 00:11:57.404 Supported: No 00:11:57.404 00:11:57.404 Admin Command Set Attributes 00:11:57.404 ============================ 00:11:57.404 Security Send/Receive: Not Supported 00:11:57.404 Format NVM: Supported 00:11:57.404 Firmware Activate/Download: Not Supported 00:11:57.404 Namespace Management: Supported 00:11:57.404 Device Self-Test: Not Supported 00:11:57.404 Directives: Supported 00:11:57.404 NVMe-MI: Not Supported 00:11:57.404 Virtualization Management: Not Supported 00:11:57.404 Doorbell Buffer Config: Supported 00:11:57.404 Get LBA Status Capability: Not Supported 00:11:57.404 Command & Feature Lockdown Capability: Not Supported 00:11:57.404 Abort Command Limit: 4 00:11:57.404 Async Event Request Limit: 4 00:11:57.405 Number of Firmware Slots: N/A 00:11:57.405 Firmware Slot 1 Read-Only: N/A 00:11:57.405 Firmware Activation Without Reset: N/A 00:11:57.405 Multiple Update Detection Support: N/A 00:11:57.405 Firmware Update Granularity: No Information Provided 00:11:57.405 Per-Namespace SMART Log: Yes 00:11:57.405 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.405 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:57.405 Command Effects Log Page: Supported 00:11:57.405 Get Log Page Extended Data: Supported 00:11:57.405 Telemetry Log Pages: Not Supported 00:11:57.405 Persistent Event Log Pages: Not Supported 00:11:57.405 Supported Log Pages Log Page: May Support 00:11:57.405 Commands Supported & Effects Log Page: Not Supported 00:11:57.405 Feature Identifiers & Effects Log Page:May Support 00:11:57.405 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.405 Data Area 4 for Telemetry Log: Not Supported 00:11:57.405 Error Log Page Entries Supported: 1 00:11:57.405 Keep Alive: Not Supported 00:11:57.405 00:11:57.405 NVM Command Set Attributes 00:11:57.405 ========================== 00:11:57.405 Submission Queue Entry Size 
00:11:57.405 Max: 64 00:11:57.405 Min: 64 00:11:57.405 Completion Queue Entry Size 00:11:57.405 Max: 16 00:11:57.405 Min: 16 00:11:57.405 Number of Namespaces: 256 00:11:57.405 Compare Command: Supported 00:11:57.405 Write Uncorrectable Command: Not Supported 00:11:57.405 Dataset Management Command: Supported 00:11:57.405 Write Zeroes Command: Supported 00:11:57.405 Set Features Save Field: Supported 00:11:57.405 Reservations: Not Supported 00:11:57.405 Timestamp: Supported 00:11:57.405 Copy: Supported 00:11:57.405 Volatile Write Cache: Present 00:11:57.405 Atomic Write Unit (Normal): 1 00:11:57.405 Atomic Write Unit (PFail): 1 00:11:57.405 Atomic Compare & Write Unit: 1 00:11:57.405 Fused Compare & Write: Not Supported 00:11:57.405 Scatter-Gather List 00:11:57.405 SGL Command Set: Supported 00:11:57.405 SGL Keyed: Not Supported 00:11:57.405 SGL Bit Bucket Descriptor: Not Supported 00:11:57.405 SGL Metadata Pointer: Not Supported 00:11:57.405 Oversized SGL: Not Supported 00:11:57.405 SGL Metadata Address: Not Supported 00:11:57.405 SGL Offset: Not Supported 00:11:57.405 Transport SGL Data Block: Not Supported 00:11:57.405 Replay Protected Memory Block: Not Supported 00:11:57.405 00:11:57.405 Firmware Slot Information 00:11:57.405 ========================= 00:11:57.405 Active slot: 1 00:11:57.405 Slot 1 Firmware Revision: 1.0 00:11:57.405 00:11:57.405 00:11:57.405 Commands Supported and Effects 00:11:57.405 ============================== 00:11:57.405 Admin Commands 00:11:57.405 -------------- 00:11:57.405 Delete I/O Submission Queue (00h): Supported 00:11:57.405 Create I/O Submission Queue (01h): Supported 00:11:57.405 Get Log Page (02h): Supported 00:11:57.405 Delete I/O Completion Queue (04h): Supported 00:11:57.405 Create I/O Completion Queue (05h): Supported 00:11:57.405 Identify (06h): Supported 00:11:57.405 Abort (08h): Supported 00:11:57.405 Set Features (09h): Supported 00:11:57.405 Get Features (0Ah): Supported 00:11:57.405 Asynchronous Event Request (0Ch): Supported 00:11:57.405 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.405 Directive Send (19h): Supported 00:11:57.405 Directive Receive (1Ah): Supported 00:11:57.405 Virtualization Management (1Ch): Supported 00:11:57.405 Doorbell Buffer Config (7Ch): Supported 00:11:57.405 Format NVM (80h): Supported LBA-Change 00:11:57.405 I/O Commands 00:11:57.405 ------------ 00:11:57.405 Flush (00h): Supported LBA-Change 00:11:57.405 Write (01h): Supported LBA-Change 00:11:57.405 Read (02h): Supported 00:11:57.405 Compare (05h): Supported 00:11:57.405 Write Zeroes (08h): Supported LBA-Change 00:11:57.405 Dataset Management (09h): Supported LBA-Change 00:11:57.405 Unknown (0Ch): Supported 00:11:57.405 Unknown (12h): Supported 00:11:57.405 Copy (19h): Supported LBA-Change 00:11:57.405 Unknown (1Dh): Supported LBA-Change 00:11:57.405 00:11:57.405 Error Log 00:11:57.405 ========= 00:11:57.405 00:11:57.405 Arbitration 00:11:57.405 =========== 00:11:57.405 Arbitration Burst: no limit 00:11:57.405 00:11:57.405 Power Management 00:11:57.405 ================ 00:11:57.405 Number of Power States: 1 00:11:57.405 Current Power State: Power State #0 00:11:57.405 Power State #0: 00:11:57.405 Max Power: 25.00 W 00:11:57.405 Non-Operational State: Operational 00:11:57.405 Entry Latency: 16 microseconds 00:11:57.405 Exit Latency: 4 microseconds 00:11:57.405 Relative Read Throughput: 0 00:11:57.405 Relative Read Latency: 0 00:11:57.405 Relative Write Throughput: 0 00:11:57.405 Relative Write Latency: 0 00:11:57.405 Idle Power: Not 
Reported 00:11:57.405 Active Power: Not Reported 00:11:57.405 Non-Operational Permissive Mode: Not Supported 00:11:57.405 00:11:57.405 Health Information 00:11:57.405 ================== 00:11:57.405 Critical Warnings: 00:11:57.405 Available Spare Space: OK 00:11:57.405 Temperature: OK 00:11:57.405 Device Reliability: OK 00:11:57.405 Read Only: No 00:11:57.405 Volatile Memory Backup: OK 00:11:57.405 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.405 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.405 Available Spare: 0% 00:11:57.405 Available Spare Threshold: 0% 00:11:57.405 Life Percentage Used: 0% 00:11:57.405 Data Units Read: 1188 00:11:57.405 Data Units Written: 566 00:11:57.405 Host Read Commands: 57380 00:11:57.405 Host Write Commands: 28582 00:11:57.405 Controller Busy Time: 0 minutes 00:11:57.405 Power Cycles: 0 00:11:57.405 Power On Hours: 0 hours 00:11:57.405 Unsafe Shutdowns: 0 00:11:57.405 Unrecoverable Media Errors: 0 00:11:57.405 Lifetime Error Log Entries: 0 00:11:57.405 Warning Temperature Time: 0 minutes 00:11:57.405 Critical Temperature Time: 0 minutes 00:11:57.405 00:11:57.405 Number of Queues 00:11:57.405 ================ 00:11:57.405 Number of I/O Submission Queues: 64 00:11:57.405 Number of I/O Completion Queues: 64 00:11:57.405 00:11:57.405 ZNS Specific Controller Data 00:11:57.405 ============================ 00:11:57.405 Zone Append Size Limit: 0 00:11:57.405 00:11:57.405 00:11:57.405 Active Namespaces 00:11:57.405 ================= 00:11:57.405 Namespace ID:1 00:11:57.405 Error Recovery Timeout: Unlimited 00:11:57.405 Command Set Identifier: NVM (00h) 00:11:57.405 Deallocate: Supported 00:11:57.405 Deallocated/Unwritten Error: Supported 00:11:57.405 Deallocated Read Value: All 0x00 00:11:57.405 Deallocate in Write Zeroes: Not Supported 00:11:57.405 Deallocated Guard Field: 0xFFFF 00:11:57.405 Flush: Supported 00:11:57.405 Reservation: Not Supported 00:11:57.405 Namespace Sharing Capabilities: Multiple Controllers 00:11:57.405 Size (in LBAs): 262144 (1GiB) 00:11:57.405 Capacity (in LBAs): 262144 (1GiB) 00:11:57.405 Utilization (in LBAs): 262144 (1GiB) 00:11:57.405 Thin Provisioning: Not Supported 00:11:57.405 Per-NS Atomic Units: No 00:11:57.405 Maximum Single Source Range Length: 128 00:11:57.405 Maximum Copy Length: 128 00:11:57.405 Maximum Source Range Count: 128 00:11:57.405 NGUID/EUI64 Never Reused: No 00:11:57.405 Namespace Write Protected: No 00:11:57.405 Endurance group ID: 1 00:11:57.405 Number of LBA Formats: 8 00:11:57.405 Current LBA Format: LBA Format #04 00:11:57.405 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.405 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.405 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.405 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.405 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.405 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.405 LBA Format #06: Data Si[2024-05-15 12:33:06.189660] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 65101 terminated unexpected 00:11:57.405 ze: 4096 Metadata Size: 16 00:11:57.405 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.405 00:11:57.405 Get Feature FDP: 00:11:57.405 ================ 00:11:57.405 Enabled: Yes 00:11:57.405 FDP configuration index: 0 00:11:57.405 00:11:57.405 FDP configurations log page 00:11:57.405 =========================== 00:11:57.405 Number of FDP configurations: 1 00:11:57.405 Version: 0 00:11:57.405 Size: 112 00:11:57.405 FDP 
Configuration Descriptor: 0 00:11:57.405 Descriptor Size: 96 00:11:57.405 Reclaim Group Identifier format: 2 00:11:57.405 FDP Volatile Write Cache: Not Present 00:11:57.405 FDP Configuration: Valid 00:11:57.405 Vendor Specific Size: 0 00:11:57.405 Number of Reclaim Groups: 2 00:11:57.405 Number of Reclaim Unit Handles: 8 00:11:57.405 Max Placement Identifiers: 128 00:11:57.405 Number of Namespaces Supported: 256 00:11:57.405 Reclaim Unit Nominal Size: 6000000 bytes 00:11:57.405 Estimated Reclaim Unit Time Limit: Not Reported 00:11:57.405 RUH Desc #000: RUH Type: Initially Isolated 00:11:57.405 RUH Desc #001: RUH Type: Initially Isolated 00:11:57.405 RUH Desc #002: RUH Type: Initially Isolated 00:11:57.405 RUH Desc #003: RUH Type: Initially Isolated 00:11:57.406 RUH Desc #004: RUH Type: Initially Isolated 00:11:57.406 RUH Desc #005: RUH Type: Initially Isolated 00:11:57.406 RUH Desc #006: RUH Type: Initially Isolated 00:11:57.406 RUH Desc #007: RUH Type: Initially Isolated 00:11:57.406 00:11:57.406 FDP reclaim unit handle usage log page 00:11:57.406 ====================================== 00:11:57.406 Number of Reclaim Unit Handles: 8 00:11:57.406 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:57.406 RUH Usage Desc #001: RUH Attributes: Unused 00:11:57.406 RUH Usage Desc #002: RUH Attributes: Unused 00:11:57.406 RUH Usage Desc #003: RUH Attributes: Unused 00:11:57.406 RUH Usage Desc #004: RUH Attributes: Unused 00:11:57.406 RUH Usage Desc #005: RUH Attributes: Unused 00:11:57.406 RUH Usage Desc #006: RUH Attributes: Unused 00:11:57.406 RUH Usage Desc #007: RUH Attributes: Unused 00:11:57.406 00:11:57.406 FDP statistics log page 00:11:57.406 ======================= 00:11:57.406 Host bytes with metadata written: 376270848 00:11:57.406 Media bytes with metadata written: 376352768 00:11:57.406 Media bytes erased: 0 00:11:57.406 00:11:57.406 FDP events log page 00:11:57.406 =================== 00:11:57.406 Number of FDP events: 0 00:11:57.406 00:11:57.406 ===================================================== 00:11:57.406 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:57.406 ===================================================== 00:11:57.406 Controller Capabilities/Features 00:11:57.406 ================================ 00:11:57.406 Vendor ID: 1b36 00:11:57.406 Subsystem Vendor ID: 1af4 00:11:57.406 Serial Number: 12342 00:11:57.406 Model Number: QEMU NVMe Ctrl 00:11:57.406 Firmware Version: 8.0.0 00:11:57.406 Recommended Arb Burst: 6 00:11:57.406 IEEE OUI Identifier: 00 54 52 00:11:57.406 Multi-path I/O 00:11:57.406 May have multiple subsystem ports: No 00:11:57.406 May have multiple controllers: No 00:11:57.406 Associated with SR-IOV VF: No 00:11:57.406 Max Data Transfer Size: 524288 00:11:57.406 Max Number of Namespaces: 256 00:11:57.406 Max Number of I/O Queues: 64 00:11:57.406 NVMe Specification Version (VS): 1.4 00:11:57.406 NVMe Specification Version (Identify): 1.4 00:11:57.406 Maximum Queue Entries: 2048 00:11:57.406 Contiguous Queues Required: Yes 00:11:57.406 Arbitration Mechanisms Supported 00:11:57.406 Weighted Round Robin: Not Supported 00:11:57.406 Vendor Specific: Not Supported 00:11:57.406 Reset Timeout: 7500 ms 00:11:57.406 Doorbell Stride: 4 bytes 00:11:57.406 NVM Subsystem Reset: Not Supported 00:11:57.406 Command Sets Supported 00:11:57.406 NVM Command Set: Supported 00:11:57.406 Boot Partition: Not Supported 00:11:57.406 Memory Page Size Minimum: 4096 bytes 00:11:57.406 Memory Page Size Maximum: 65536 bytes 00:11:57.406 Persistent Memory Region: Not
Supported 00:11:57.406 Optional Asynchronous Events Supported 00:11:57.406 Namespace Attribute Notices: Supported 00:11:57.406 Firmware Activation Notices: Not Supported 00:11:57.406 ANA Change Notices: Not Supported 00:11:57.406 PLE Aggregate Log Change Notices: Not Supported 00:11:57.406 LBA Status Info Alert Notices: Not Supported 00:11:57.406 EGE Aggregate Log Change Notices: Not Supported 00:11:57.406 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.406 Zone Descriptor Change Notices: Not Supported 00:11:57.406 Discovery Log Change Notices: Not Supported 00:11:57.406 Controller Attributes 00:11:57.406 128-bit Host Identifier: Not Supported 00:11:57.406 Non-Operational Permissive Mode: Not Supported 00:11:57.406 NVM Sets: Not Supported 00:11:57.406 Read Recovery Levels: Not Supported 00:11:57.406 Endurance Groups: Not Supported 00:11:57.406 Predictable Latency Mode: Not Supported 00:11:57.406 Traffic Based Keep ALive: Not Supported 00:11:57.406 Namespace Granularity: Not Supported 00:11:57.406 SQ Associations: Not Supported 00:11:57.406 UUID List: Not Supported 00:11:57.406 Multi-Domain Subsystem: Not Supported 00:11:57.406 Fixed Capacity Management: Not Supported 00:11:57.406 Variable Capacity Management: Not Supported 00:11:57.406 Delete Endurance Group: Not Supported 00:11:57.406 Delete NVM Set: Not Supported 00:11:57.406 Extended LBA Formats Supported: Supported 00:11:57.406 Flexible Data Placement Supported: Not Supported 00:11:57.406 00:11:57.406 Controller Memory Buffer Support 00:11:57.406 ================================ 00:11:57.406 Supported: No 00:11:57.406 00:11:57.406 Persistent Memory Region Support 00:11:57.406 ================================ 00:11:57.406 Supported: No 00:11:57.406 00:11:57.406 Admin Command Set Attributes 00:11:57.406 ============================ 00:11:57.406 Security Send/Receive: Not Supported 00:11:57.406 Format NVM: Supported 00:11:57.406 Firmware Activate/Download: Not Supported 00:11:57.406 Namespace Management: Supported 00:11:57.406 Device Self-Test: Not Supported 00:11:57.406 Directives: Supported 00:11:57.406 NVMe-MI: Not Supported 00:11:57.406 Virtualization Management: Not Supported 00:11:57.406 Doorbell Buffer Config: Supported 00:11:57.406 Get LBA Status Capability: Not Supported 00:11:57.406 Command & Feature Lockdown Capability: Not Supported 00:11:57.406 Abort Command Limit: 4 00:11:57.406 Async Event Request Limit: 4 00:11:57.406 Number of Firmware Slots: N/A 00:11:57.406 Firmware Slot 1 Read-Only: N/A 00:11:57.406 Firmware Activation Without Reset: N/A 00:11:57.406 Multiple Update Detection Support: N/A 00:11:57.406 Firmware Update Granularity: No Information Provided 00:11:57.406 Per-Namespace SMART Log: Yes 00:11:57.406 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.406 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:57.406 Command Effects Log Page: Supported 00:11:57.406 Get Log Page Extended Data: Supported 00:11:57.406 Telemetry Log Pages: Not Supported 00:11:57.406 Persistent Event Log Pages: Not Supported 00:11:57.406 Supported Log Pages Log Page: May Support 00:11:57.406 Commands Supported & Effects Log Page: Not Supported 00:11:57.406 Feature Identifiers & Effects Log Page:May Support 00:11:57.406 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.406 Data Area 4 for Telemetry Log: Not Supported 00:11:57.406 Error Log Page Entries Supported: 1 00:11:57.406 Keep Alive: Not Supported 00:11:57.406 00:11:57.406 NVM Command Set Attributes 00:11:57.406 ========================== 00:11:57.406 
Submission Queue Entry Size 00:11:57.406 Max: 64 00:11:57.406 Min: 64 00:11:57.406 Completion Queue Entry Size 00:11:57.406 Max: 16 00:11:57.406 Min: 16 00:11:57.406 Number of Namespaces: 256 00:11:57.406 Compare Command: Supported 00:11:57.406 Write Uncorrectable Command: Not Supported 00:11:57.406 Dataset Management Command: Supported 00:11:57.406 Write Zeroes Command: Supported 00:11:57.406 Set Features Save Field: Supported 00:11:57.406 Reservations: Not Supported 00:11:57.406 Timestamp: Supported 00:11:57.406 Copy: Supported 00:11:57.406 Volatile Write Cache: Present 00:11:57.406 Atomic Write Unit (Normal): 1 00:11:57.406 Atomic Write Unit (PFail): 1 00:11:57.406 Atomic Compare & Write Unit: 1 00:11:57.406 Fused Compare & Write: Not Supported 00:11:57.406 Scatter-Gather List 00:11:57.406 SGL Command Set: Supported 00:11:57.406 SGL Keyed: Not Supported 00:11:57.406 SGL Bit Bucket Descriptor: Not Supported 00:11:57.406 SGL Metadata Pointer: Not Supported 00:11:57.406 Oversized SGL: Not Supported 00:11:57.406 SGL Metadata Address: Not Supported 00:11:57.406 SGL Offset: Not Supported 00:11:57.406 Transport SGL Data Block: Not Supported 00:11:57.406 Replay Protected Memory Block: Not Supported 00:11:57.406 00:11:57.406 Firmware Slot Information 00:11:57.406 ========================= 00:11:57.406 Active slot: 1 00:11:57.407 Slot 1 Firmware Revision: 1.0 00:11:57.407 00:11:57.407 00:11:57.407 Commands Supported and Effects 00:11:57.407 ============================== 00:11:57.407 Admin Commands 00:11:57.407 -------------- 00:11:57.407 Delete I/O Submission Queue (00h): Supported 00:11:57.407 Create I/O Submission Queue (01h): Supported 00:11:57.407 Get Log Page (02h): Supported 00:11:57.407 Delete I/O Completion Queue (04h): Supported 00:11:57.407 Create I/O Completion Queue (05h): Supported 00:11:57.407 Identify (06h): Supported 00:11:57.407 Abort (08h): Supported 00:11:57.407 Set Features (09h): Supported 00:11:57.407 Get Features (0Ah): Supported 00:11:57.407 Asynchronous Event Request (0Ch): Supported 00:11:57.407 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.407 Directive Send (19h): Supported 00:11:57.407 Directive Receive (1Ah): Supported 00:11:57.407 Virtualization Management (1Ch): Supported 00:11:57.407 Doorbell Buffer Config (7Ch): Supported 00:11:57.407 Format NVM (80h): Supported LBA-Change 00:11:57.407 I/O Commands 00:11:57.407 ------------ 00:11:57.407 Flush (00h): Supported LBA-Change 00:11:57.407 Write (01h): Supported LBA-Change 00:11:57.407 Read (02h): Supported 00:11:57.407 Compare (05h): Supported 00:11:57.407 Write Zeroes (08h): Supported LBA-Change 00:11:57.407 Dataset Management (09h): Supported LBA-Change 00:11:57.407 Unknown (0Ch): Supported 00:11:57.407 Unknown (12h): Supported 00:11:57.407 Copy (19h): Supported LBA-Change 00:11:57.407 Unknown (1Dh): Supported LBA-Change 00:11:57.407 00:11:57.407 Error Log 00:11:57.407 ========= 00:11:57.407 00:11:57.407 Arbitration 00:11:57.407 =========== 00:11:57.407 Arbitration Burst: no limit 00:11:57.407 00:11:57.407 Power Management 00:11:57.407 ================ 00:11:57.407 Number of Power States: 1 00:11:57.407 Current Power State: Power State #0 00:11:57.407 Power State #0: 00:11:57.407 Max Power: 25.00 W 00:11:57.407 Non-Operational State: Operational 00:11:57.407 Entry Latency: 16 microseconds 00:11:57.407 Exit Latency: 4 microseconds 00:11:57.407 Relative Read Throughput: 0 00:11:57.407 Relative Read Latency: 0 00:11:57.407 Relative Write Throughput: 0 00:11:57.407 Relative Write Latency: 0 
00:11:57.407 Idle Power: Not Reported 00:11:57.407 Active Power: Not Reported 00:11:57.407 Non-Operational Permissive Mode: Not Supported 00:11:57.407 00:11:57.407 Health Information 00:11:57.407 ================== 00:11:57.407 Critical Warnings: 00:11:57.407 Available Spare Space: OK 00:11:57.407 Temperature: OK 00:11:57.407 Device Reliability: OK 00:11:57.407 Read Only: No 00:11:57.407 Volatile Memory Backup: OK 00:11:57.407 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.407 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.407 Available Spare: 0% 00:11:57.407 Available Spare Threshold: 0% 00:11:57.407 Life Percentage Used: 0% 00:11:57.407 Data Units Read: 3630 00:11:57.407 Data Units Written: 1673 00:11:57.407 Host Read Commands: 174237 00:11:57.407 Host Write Commands: 85487 00:11:57.407 Controller Busy Time: 0 minutes 00:11:57.407 Power Cycles: 0 00:11:57.407 Power On Hours: 0 hours 00:11:57.407 Unsafe Shutdowns: 0 00:11:57.407 Unrecoverable Media Errors: 0 00:11:57.407 Lifetime Error Log Entries: 0 00:11:57.407 Warning Temperature Time: 0 minutes 00:11:57.407 Critical Temperature Time: 0 minutes 00:11:57.407 00:11:57.407 Number of Queues 00:11:57.407 ================ 00:11:57.407 Number of I/O Submission Queues: 64 00:11:57.407 Number of I/O Completion Queues: 64 00:11:57.407 00:11:57.407 ZNS Specific Controller Data 00:11:57.407 ============================ 00:11:57.407 Zone Append Size Limit: 0 00:11:57.407 00:11:57.407 00:11:57.407 Active Namespaces 00:11:57.407 ================= 00:11:57.407 Namespace ID:1 00:11:57.407 Error Recovery Timeout: Unlimited 00:11:57.407 Command Set Identifier: NVM (00h) 00:11:57.407 Deallocate: Supported 00:11:57.407 Deallocated/Unwritten Error: Supported 00:11:57.407 Deallocated Read Value: All 0x00 00:11:57.407 Deallocate in Write Zeroes: Not Supported 00:11:57.407 Deallocated Guard Field: 0xFFFF 00:11:57.407 Flush: Supported 00:11:57.407 Reservation: Not Supported 00:11:57.407 Namespace Sharing Capabilities: Private 00:11:57.407 Size (in LBAs): 1048576 (4GiB) 00:11:57.407 Capacity (in LBAs): 1048576 (4GiB) 00:11:57.407 Utilization (in LBAs): 1048576 (4GiB) 00:11:57.407 Thin Provisioning: Not Supported 00:11:57.407 Per-NS Atomic Units: No 00:11:57.407 Maximum Single Source Range Length: 128 00:11:57.407 Maximum Copy Length: 128 00:11:57.407 Maximum Source Range Count: 128 00:11:57.407 NGUID/EUI64 Never Reused: No 00:11:57.407 Namespace Write Protected: No 00:11:57.407 Number of LBA Formats: 8 00:11:57.407 Current LBA Format: LBA Format #04 00:11:57.407 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.407 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.407 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.407 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.407 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.407 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.407 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.407 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.407 00:11:57.407 Namespace ID:2 00:11:57.407 Error Recovery Timeout: Unlimited 00:11:57.407 Command Set Identifier: NVM (00h) 00:11:57.407 Deallocate: Supported 00:11:57.407 Deallocated/Unwritten Error: Supported 00:11:57.407 Deallocated Read Value: All 0x00 00:11:57.407 Deallocate in Write Zeroes: Not Supported 00:11:57.407 Deallocated Guard Field: 0xFFFF 00:11:57.407 Flush: Supported 00:11:57.407 Reservation: Not Supported 00:11:57.407 Namespace Sharing Capabilities: Private 00:11:57.407 Size (in LBAs): 
1048576 (4GiB) 00:11:57.407 Capacity (in LBAs): 1048576 (4GiB) 00:11:57.407 Utilization (in LBAs): 1048576 (4GiB) 00:11:57.407 Thin Provisioning: Not Supported 00:11:57.407 Per-NS Atomic Units: No 00:11:57.407 Maximum Single Source Range Length: 128 00:11:57.407 Maximum Copy Length: 128 00:11:57.407 Maximum Source Range Count: 128 00:11:57.407 NGUID/EUI64 Never Reused: No 00:11:57.407 Namespace Write Protected: No 00:11:57.407 Number of LBA Formats: 8 00:11:57.407 Current LBA Format: LBA Format #04 00:11:57.407 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.407 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.407 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.407 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.407 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.407 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.407 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.407 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.407 00:11:57.407 Namespace ID:3 00:11:57.407 Error Recovery Timeout: Unlimited 00:11:57.407 Command Set Identifier: NVM (00h) 00:11:57.407 Deallocate: Supported 00:11:57.407 Deallocated/Unwritten Error: Supported 00:11:57.407 Deallocated Read Value: All 0x00 00:11:57.407 Deallocate in Write Zeroes: Not Supported 00:11:57.407 Deallocated Guard Field: 0xFFFF 00:11:57.407 Flush: Supported 00:11:57.407 Reservation: Not Supported 00:11:57.407 Namespace Sharing Capabilities: Private 00:11:57.407 Size (in LBAs): 1048576 (4GiB) 00:11:57.407 Capacity (in LBAs): 1048576 (4GiB) 00:11:57.407 Utilization (in LBAs): 1048576 (4GiB) 00:11:57.407 Thin Provisioning: Not Supported 00:11:57.407 Per-NS Atomic Units: No 00:11:57.407 Maximum Single Source Range Length: 128 00:11:57.407 Maximum Copy Length: 128 00:11:57.407 Maximum Source Range Count: 128 00:11:57.407 NGUID/EUI64 Never Reused: No 00:11:57.407 Namespace Write Protected: No 00:11:57.407 Number of LBA Formats: 8 00:11:57.407 Current LBA Format: LBA Format #04 00:11:57.407 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.407 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.407 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.407 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.407 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.407 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.407 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.407 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.407 00:11:57.407 12:33:06 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:57.407 12:33:06 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:11:57.689 ===================================================== 00:11:57.689 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:57.689 ===================================================== 00:11:57.689 Controller Capabilities/Features 00:11:57.689 ================================ 00:11:57.689 Vendor ID: 1b36 00:11:57.689 Subsystem Vendor ID: 1af4 00:11:57.689 Serial Number: 12340 00:11:57.689 Model Number: QEMU NVMe Ctrl 00:11:57.689 Firmware Version: 8.0.0 00:11:57.689 Recommended Arb Burst: 6 00:11:57.689 IEEE OUI Identifier: 00 54 52 00:11:57.689 Multi-path I/O 00:11:57.689 May have multiple subsystem ports: No 00:11:57.689 May have multiple controllers: No 00:11:57.689 Associated with SR-IOV VF: No 00:11:57.689 Max Data Transfer Size: 524288 00:11:57.689 Max Number of Namespaces: 256 
00:11:57.689 Max Number of I/O Queues: 64 00:11:57.689 NVMe Specification Version (VS): 1.4 00:11:57.689 NVMe Specification Version (Identify): 1.4 00:11:57.689 Maximum Queue Entries: 2048 00:11:57.689 Contiguous Queues Required: Yes 00:11:57.689 Arbitration Mechanisms Supported 00:11:57.689 Weighted Round Robin: Not Supported 00:11:57.689 Vendor Specific: Not Supported 00:11:57.689 Reset Timeout: 7500 ms 00:11:57.689 Doorbell Stride: 4 bytes 00:11:57.689 NVM Subsystem Reset: Not Supported 00:11:57.689 Command Sets Supported 00:11:57.689 NVM Command Set: Supported 00:11:57.689 Boot Partition: Not Supported 00:11:57.689 Memory Page Size Minimum: 4096 bytes 00:11:57.689 Memory Page Size Maximum: 65536 bytes 00:11:57.689 Persistent Memory Region: Not Supported 00:11:57.689 Optional Asynchronous Events Supported 00:11:57.689 Namespace Attribute Notices: Supported 00:11:57.689 Firmware Activation Notices: Not Supported 00:11:57.689 ANA Change Notices: Not Supported 00:11:57.689 PLE Aggregate Log Change Notices: Not Supported 00:11:57.689 LBA Status Info Alert Notices: Not Supported 00:11:57.689 EGE Aggregate Log Change Notices: Not Supported 00:11:57.689 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.689 Zone Descriptor Change Notices: Not Supported 00:11:57.689 Discovery Log Change Notices: Not Supported 00:11:57.689 Controller Attributes 00:11:57.689 128-bit Host Identifier: Not Supported 00:11:57.689 Non-Operational Permissive Mode: Not Supported 00:11:57.689 NVM Sets: Not Supported 00:11:57.689 Read Recovery Levels: Not Supported 00:11:57.689 Endurance Groups: Not Supported 00:11:57.689 Predictable Latency Mode: Not Supported 00:11:57.689 Traffic Based Keep Alive: Not Supported 00:11:57.689 Namespace Granularity: Not Supported 00:11:57.689 SQ Associations: Not Supported 00:11:57.689 UUID List: Not Supported 00:11:57.689 Multi-Domain Subsystem: Not Supported 00:11:57.689 Fixed Capacity Management: Not Supported 00:11:57.689 Variable Capacity Management: Not Supported 00:11:57.689 Delete Endurance Group: Not Supported 00:11:57.689 Delete NVM Set: Not Supported 00:11:57.690 Extended LBA Formats Supported: Supported 00:11:57.690 Flexible Data Placement Supported: Not Supported 00:11:57.690 00:11:57.690 Controller Memory Buffer Support 00:11:57.690 ================================ 00:11:57.690 Supported: No 00:11:57.690 00:11:57.690 Persistent Memory Region Support 00:11:57.690 ================================ 00:11:57.690 Supported: No 00:11:57.690 00:11:57.690 Admin Command Set Attributes 00:11:57.690 ============================ 00:11:57.690 Security Send/Receive: Not Supported 00:11:57.690 Format NVM: Supported 00:11:57.690 Firmware Activate/Download: Not Supported 00:11:57.690 Namespace Management: Supported 00:11:57.690 Device Self-Test: Not Supported 00:11:57.690 Directives: Supported 00:11:57.690 NVMe-MI: Not Supported 00:11:57.690 Virtualization Management: Not Supported 00:11:57.690 Doorbell Buffer Config: Supported 00:11:57.690 Get LBA Status Capability: Not Supported 00:11:57.690 Command & Feature Lockdown Capability: Not Supported 00:11:57.690 Abort Command Limit: 4 00:11:57.690 Async Event Request Limit: 4 00:11:57.690 Number of Firmware Slots: N/A 00:11:57.690 Firmware Slot 1 Read-Only: N/A 00:11:57.690 Firmware Activation Without Reset: N/A 00:11:57.690 Multiple Update Detection Support: N/A 00:11:57.690 Firmware Update Granularity: No Information Provided 00:11:57.690 Per-Namespace SMART Log: Yes 00:11:57.690 Asymmetric Namespace Access Log Page: Not Supported
00:11:57.690 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:57.690 Command Effects Log Page: Supported 00:11:57.690 Get Log Page Extended Data: Supported 00:11:57.690 Telemetry Log Pages: Not Supported 00:11:57.690 Persistent Event Log Pages: Not Supported 00:11:57.690 Supported Log Pages Log Page: May Support 00:11:57.690 Commands Supported & Effects Log Page: Not Supported 00:11:57.690 Feature Identifiers & Effects Log Page: May Support 00:11:57.690 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.690 Data Area 4 for Telemetry Log: Not Supported 00:11:57.690 Error Log Page Entries Supported: 1 00:11:57.690 Keep Alive: Not Supported 00:11:57.690 00:11:57.690 NVM Command Set Attributes 00:11:57.690 ========================== 00:11:57.690 Submission Queue Entry Size 00:11:57.690 Max: 64 00:11:57.690 Min: 64 00:11:57.690 Completion Queue Entry Size 00:11:57.690 Max: 16 00:11:57.690 Min: 16 00:11:57.690 Number of Namespaces: 256 00:11:57.690 Compare Command: Supported 00:11:57.690 Write Uncorrectable Command: Not Supported 00:11:57.690 Dataset Management Command: Supported 00:11:57.690 Write Zeroes Command: Supported 00:11:57.690 Set Features Save Field: Supported 00:11:57.690 Reservations: Not Supported 00:11:57.690 Timestamp: Supported 00:11:57.690 Copy: Supported 00:11:57.690 Volatile Write Cache: Present 00:11:57.690 Atomic Write Unit (Normal): 1 00:11:57.690 Atomic Write Unit (PFail): 1 00:11:57.690 Atomic Compare & Write Unit: 1 00:11:57.690 Fused Compare & Write: Not Supported 00:11:57.690 Scatter-Gather List 00:11:57.690 SGL Command Set: Supported 00:11:57.690 SGL Keyed: Not Supported 00:11:57.690 SGL Bit Bucket Descriptor: Not Supported 00:11:57.690 SGL Metadata Pointer: Not Supported 00:11:57.690 Oversized SGL: Not Supported 00:11:57.690 SGL Metadata Address: Not Supported 00:11:57.690 SGL Offset: Not Supported 00:11:57.690 Transport SGL Data Block: Not Supported 00:11:57.690 Replay Protected Memory Block: Not Supported 00:11:57.690 00:11:57.690 Firmware Slot Information 00:11:57.690 ========================= 00:11:57.690 Active slot: 1 00:11:57.690 Slot 1 Firmware Revision: 1.0 00:11:57.690 00:11:57.690 00:11:57.690 Commands Supported and Effects 00:11:57.690 ============================== 00:11:57.690 Admin Commands 00:11:57.690 -------------- 00:11:57.690 Delete I/O Submission Queue (00h): Supported 00:11:57.690 Create I/O Submission Queue (01h): Supported 00:11:57.690 Get Log Page (02h): Supported 00:11:57.690 Delete I/O Completion Queue (04h): Supported 00:11:57.690 Create I/O Completion Queue (05h): Supported 00:11:57.690 Identify (06h): Supported 00:11:57.690 Abort (08h): Supported 00:11:57.690 Set Features (09h): Supported 00:11:57.690 Get Features (0Ah): Supported 00:11:57.690 Asynchronous Event Request (0Ch): Supported 00:11:57.690 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.690 Directive Send (19h): Supported 00:11:57.690 Directive Receive (1Ah): Supported 00:11:57.690 Virtualization Management (1Ch): Supported 00:11:57.690 Doorbell Buffer Config (7Ch): Supported 00:11:57.690 Format NVM (80h): Supported LBA-Change 00:11:57.690 I/O Commands 00:11:57.690 ------------ 00:11:57.690 Flush (00h): Supported LBA-Change 00:11:57.690 Write (01h): Supported LBA-Change 00:11:57.690 Read (02h): Supported 00:11:57.690 Compare (05h): Supported 00:11:57.690 Write Zeroes (08h): Supported LBA-Change 00:11:57.690 Dataset Management (09h): Supported LBA-Change 00:11:57.690 Unknown (0Ch): Supported 00:11:57.690 Unknown (12h): Supported 00:11:57.690 Copy (19h):
Supported LBA-Change 00:11:57.690 Unknown (1Dh): Supported LBA-Change 00:11:57.690 00:11:57.690 Error Log 00:11:57.690 ========= 00:11:57.690 00:11:57.690 Arbitration 00:11:57.690 =========== 00:11:57.690 Arbitration Burst: no limit 00:11:57.690 00:11:57.690 Power Management 00:11:57.690 ================ 00:11:57.690 Number of Power States: 1 00:11:57.690 Current Power State: Power State #0 00:11:57.690 Power State #0: 00:11:57.690 Max Power: 25.00 W 00:11:57.690 Non-Operational State: Operational 00:11:57.690 Entry Latency: 16 microseconds 00:11:57.690 Exit Latency: 4 microseconds 00:11:57.690 Relative Read Throughput: 0 00:11:57.690 Relative Read Latency: 0 00:11:57.690 Relative Write Throughput: 0 00:11:57.690 Relative Write Latency: 0 00:11:57.690 Idle Power: Not Reported 00:11:57.690 Active Power: Not Reported 00:11:57.690 Non-Operational Permissive Mode: Not Supported 00:11:57.690 00:11:57.690 Health Information 00:11:57.690 ================== 00:11:57.690 Critical Warnings: 00:11:57.690 Available Spare Space: OK 00:11:57.690 Temperature: OK 00:11:57.690 Device Reliability: OK 00:11:57.690 Read Only: No 00:11:57.690 Volatile Memory Backup: OK 00:11:57.690 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.690 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.690 Available Spare: 0% 00:11:57.690 Available Spare Threshold: 0% 00:11:57.690 Life Percentage Used: 0% 00:11:57.690 Data Units Read: 1753 00:11:57.690 Data Units Written: 804 00:11:57.690 Host Read Commands: 84438 00:11:57.690 Host Write Commands: 41811 00:11:57.690 Controller Busy Time: 0 minutes 00:11:57.690 Power Cycles: 0 00:11:57.690 Power On Hours: 0 hours 00:11:57.690 Unsafe Shutdowns: 0 00:11:57.690 Unrecoverable Media Errors: 0 00:11:57.690 Lifetime Error Log Entries: 0 00:11:57.690 Warning Temperature Time: 0 minutes 00:11:57.690 Critical Temperature Time: 0 minutes 00:11:57.690 00:11:57.690 Number of Queues 00:11:57.690 ================ 00:11:57.690 Number of I/O Submission Queues: 64 00:11:57.690 Number of I/O Completion Queues: 64 00:11:57.690 00:11:57.690 ZNS Specific Controller Data 00:11:57.690 ============================ 00:11:57.690 Zone Append Size Limit: 0 00:11:57.690 00:11:57.690 00:11:57.690 Active Namespaces 00:11:57.690 ================= 00:11:57.690 Namespace ID:1 00:11:57.690 Error Recovery Timeout: Unlimited 00:11:57.690 Command Set Identifier: NVM (00h) 00:11:57.690 Deallocate: Supported 00:11:57.690 Deallocated/Unwritten Error: Supported 00:11:57.690 Deallocated Read Value: All 0x00 00:11:57.690 Deallocate in Write Zeroes: Not Supported 00:11:57.690 Deallocated Guard Field: 0xFFFF 00:11:57.690 Flush: Supported 00:11:57.690 Reservation: Not Supported 00:11:57.690 Metadata Transferred as: Separate Metadata Buffer 00:11:57.690 Namespace Sharing Capabilities: Private 00:11:57.690 Size (in LBAs): 1548666 (5GiB) 00:11:57.690 Capacity (in LBAs): 1548666 (5GiB) 00:11:57.690 Utilization (in LBAs): 1548666 (5GiB) 00:11:57.690 Thin Provisioning: Not Supported 00:11:57.690 Per-NS Atomic Units: No 00:11:57.690 Maximum Single Source Range Length: 128 00:11:57.690 Maximum Copy Length: 128 00:11:57.690 Maximum Source Range Count: 128 00:11:57.690 NGUID/EUI64 Never Reused: No 00:11:57.690 Namespace Write Protected: No 00:11:57.690 Number of LBA Formats: 8 00:11:57.690 Current LBA Format: LBA Format #07 00:11:57.690 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.690 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.690 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.690 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:11:57.690 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.690 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.690 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.690 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.690 00:11:57.690 12:33:06 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:57.690 12:33:06 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:11:57.950 ===================================================== 00:11:57.950 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:57.950 ===================================================== 00:11:57.950 Controller Capabilities/Features 00:11:57.950 ================================ 00:11:57.950 Vendor ID: 1b36 00:11:57.950 Subsystem Vendor ID: 1af4 00:11:57.950 Serial Number: 12341 00:11:57.950 Model Number: QEMU NVMe Ctrl 00:11:57.950 Firmware Version: 8.0.0 00:11:57.950 Recommended Arb Burst: 6 00:11:57.950 IEEE OUI Identifier: 00 54 52 00:11:57.950 Multi-path I/O 00:11:57.950 May have multiple subsystem ports: No 00:11:57.950 May have multiple controllers: No 00:11:57.950 Associated with SR-IOV VF: No 00:11:57.950 Max Data Transfer Size: 524288 00:11:57.950 Max Number of Namespaces: 256 00:11:57.950 Max Number of I/O Queues: 64 00:11:57.950 NVMe Specification Version (VS): 1.4 00:11:57.950 NVMe Specification Version (Identify): 1.4 00:11:57.950 Maximum Queue Entries: 2048 00:11:57.950 Contiguous Queues Required: Yes 00:11:57.950 Arbitration Mechanisms Supported 00:11:57.950 Weighted Round Robin: Not Supported 00:11:57.950 Vendor Specific: Not Supported 00:11:57.950 Reset Timeout: 7500 ms 00:11:57.950 Doorbell Stride: 4 bytes 00:11:57.950 NVM Subsystem Reset: Not Supported 00:11:57.950 Command Sets Supported 00:11:57.950 NVM Command Set: Supported 00:11:57.950 Boot Partition: Not Supported 00:11:57.950 Memory Page Size Minimum: 4096 bytes 00:11:57.950 Memory Page Size Maximum: 65536 bytes 00:11:57.950 Persistent Memory Region: Not Supported 00:11:57.950 Optional Asynchronous Events Supported 00:11:57.950 Namespace Attribute Notices: Supported 00:11:57.950 Firmware Activation Notices: Not Supported 00:11:57.950 ANA Change Notices: Not Supported 00:11:57.950 PLE Aggregate Log Change Notices: Not Supported 00:11:57.950 LBA Status Info Alert Notices: Not Supported 00:11:57.950 EGE Aggregate Log Change Notices: Not Supported 00:11:57.950 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.950 Zone Descriptor Change Notices: Not Supported 00:11:57.950 Discovery Log Change Notices: Not Supported 00:11:57.950 Controller Attributes 00:11:57.950 128-bit Host Identifier: Not Supported 00:11:57.950 Non-Operational Permissive Mode: Not Supported 00:11:57.950 NVM Sets: Not Supported 00:11:57.950 Read Recovery Levels: Not Supported 00:11:57.950 Endurance Groups: Not Supported 00:11:57.950 Predictable Latency Mode: Not Supported 00:11:57.950 Traffic Based Keep Alive: Not Supported 00:11:57.950 Namespace Granularity: Not Supported 00:11:57.950 SQ Associations: Not Supported 00:11:57.950 UUID List: Not Supported 00:11:57.950 Multi-Domain Subsystem: Not Supported 00:11:57.950 Fixed Capacity Management: Not Supported 00:11:57.950 Variable Capacity Management: Not Supported 00:11:57.950 Delete Endurance Group: Not Supported 00:11:57.950 Delete NVM Set: Not Supported 00:11:57.950 Extended LBA Formats Supported: Supported 00:11:57.950 Flexible Data Placement Supported: Not Supported
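The nvme.sh xtrace markers above (nvme/nvme.sh@15 and @16) show the loop that produces each of these dumps. A minimal sketch of that loop, reconstructed only from the traces in this log (the bdfs array is inferred from the addresses probed in this run, not copied from the script itself):

    # Walk each PCIe BDF seen in this job and dump its identify data,
    # using the same binary path and transport-ID syntax as the xtrace above.
    bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:$bdf" -i 0
    done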
00:11:57.950 00:11:57.950 Controller Memory Buffer Support 00:11:57.950 ================================ 00:11:57.950 Supported: No 00:11:57.950 00:11:57.950 Persistent Memory Region Support 00:11:57.950 ================================ 00:11:57.950 Supported: No 00:11:57.950 00:11:57.950 Admin Command Set Attributes 00:11:57.950 ============================ 00:11:57.950 Security Send/Receive: Not Supported 00:11:57.950 Format NVM: Supported 00:11:57.950 Firmware Activate/Download: Not Supported 00:11:57.950 Namespace Management: Supported 00:11:57.950 Device Self-Test: Not Supported 00:11:57.950 Directives: Supported 00:11:57.950 NVMe-MI: Not Supported 00:11:57.950 Virtualization Management: Not Supported 00:11:57.950 Doorbell Buffer Config: Supported 00:11:57.950 Get LBA Status Capability: Not Supported 00:11:57.950 Command & Feature Lockdown Capability: Not Supported 00:11:57.950 Abort Command Limit: 4 00:11:57.950 Async Event Request Limit: 4 00:11:57.950 Number of Firmware Slots: N/A 00:11:57.950 Firmware Slot 1 Read-Only: N/A 00:11:57.950 Firmware Activation Without Reset: N/A 00:11:57.950 Multiple Update Detection Support: N/A 00:11:57.950 Firmware Update Granularity: No Information Provided 00:11:57.950 Per-Namespace SMART Log: Yes 00:11:57.950 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.950 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:57.950 Command Effects Log Page: Supported 00:11:57.950 Get Log Page Extended Data: Supported 00:11:57.950 Telemetry Log Pages: Not Supported 00:11:57.950 Persistent Event Log Pages: Not Supported 00:11:57.950 Supported Log Pages Log Page: May Support 00:11:57.950 Commands Supported & Effects Log Page: Not Supported 00:11:57.950 Feature Identifiers & Effects Log Page: May Support 00:11:57.950 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.950 Data Area 4 for Telemetry Log: Not Supported 00:11:57.950 Error Log Page Entries Supported: 1 00:11:57.950 Keep Alive: Not Supported 00:11:57.950 00:11:57.950 NVM Command Set Attributes 00:11:57.950 ========================== 00:11:57.950 Submission Queue Entry Size 00:11:57.950 Max: 64 00:11:57.950 Min: 64 00:11:57.950 Completion Queue Entry Size 00:11:57.950 Max: 16 00:11:57.950 Min: 16 00:11:57.950 Number of Namespaces: 256 00:11:57.950 Compare Command: Supported 00:11:57.950 Write Uncorrectable Command: Not Supported 00:11:57.950 Dataset Management Command: Supported 00:11:57.950 Write Zeroes Command: Supported 00:11:57.950 Set Features Save Field: Supported 00:11:57.950 Reservations: Not Supported 00:11:57.950 Timestamp: Supported 00:11:57.950 Copy: Supported 00:11:57.950 Volatile Write Cache: Present 00:11:57.950 Atomic Write Unit (Normal): 1 00:11:57.950 Atomic Write Unit (PFail): 1 00:11:57.950 Atomic Compare & Write Unit: 1 00:11:57.950 Fused Compare & Write: Not Supported 00:11:57.950 Scatter-Gather List 00:11:57.950 SGL Command Set: Supported 00:11:57.950 SGL Keyed: Not Supported 00:11:57.950 SGL Bit Bucket Descriptor: Not Supported 00:11:57.950 SGL Metadata Pointer: Not Supported 00:11:57.950 Oversized SGL: Not Supported 00:11:57.950 SGL Metadata Address: Not Supported 00:11:57.950 SGL Offset: Not Supported 00:11:57.950 Transport SGL Data Block: Not Supported 00:11:57.950 Replay Protected Memory Block: Not Supported 00:11:57.950 00:11:57.950 Firmware Slot Information 00:11:57.950 ========================= 00:11:57.950 Active slot: 1 00:11:57.950 Slot 1 Firmware Revision: 1.0 00:11:57.950 00:11:57.950 00:11:57.950 Commands Supported and Effects 00:11:57.950
============================== 00:11:57.950 Admin Commands 00:11:57.950 -------------- 00:11:57.950 Delete I/O Submission Queue (00h): Supported 00:11:57.950 Create I/O Submission Queue (01h): Supported 00:11:57.950 Get Log Page (02h): Supported 00:11:57.950 Delete I/O Completion Queue (04h): Supported 00:11:57.950 Create I/O Completion Queue (05h): Supported 00:11:57.950 Identify (06h): Supported 00:11:57.950 Abort (08h): Supported 00:11:57.950 Set Features (09h): Supported 00:11:57.950 Get Features (0Ah): Supported 00:11:57.950 Asynchronous Event Request (0Ch): Supported 00:11:57.950 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.950 Directive Send (19h): Supported 00:11:57.950 Directive Receive (1Ah): Supported 00:11:57.950 Virtualization Management (1Ch): Supported 00:11:57.950 Doorbell Buffer Config (7Ch): Supported 00:11:57.950 Format NVM (80h): Supported LBA-Change 00:11:57.950 I/O Commands 00:11:57.950 ------------ 00:11:57.951 Flush (00h): Supported LBA-Change 00:11:57.951 Write (01h): Supported LBA-Change 00:11:57.951 Read (02h): Supported 00:11:57.951 Compare (05h): Supported 00:11:57.951 Write Zeroes (08h): Supported LBA-Change 00:11:57.951 Dataset Management (09h): Supported LBA-Change 00:11:57.951 Unknown (0Ch): Supported 00:11:57.951 Unknown (12h): Supported 00:11:57.951 Copy (19h): Supported LBA-Change 00:11:57.951 Unknown (1Dh): Supported LBA-Change 00:11:57.951 00:11:57.951 Error Log 00:11:57.951 ========= 00:11:57.951 00:11:57.951 Arbitration 00:11:57.951 =========== 00:11:57.951 Arbitration Burst: no limit 00:11:57.951 00:11:57.951 Power Management 00:11:57.951 ================ 00:11:57.951 Number of Power States: 1 00:11:57.951 Current Power State: Power State #0 00:11:57.951 Power State #0: 00:11:57.951 Max Power: 25.00 W 00:11:57.951 Non-Operational State: Operational 00:11:57.951 Entry Latency: 16 microseconds 00:11:57.951 Exit Latency: 4 microseconds 00:11:57.951 Relative Read Throughput: 0 00:11:57.951 Relative Read Latency: 0 00:11:57.951 Relative Write Throughput: 0 00:11:57.951 Relative Write Latency: 0 00:11:57.951 Idle Power: Not Reported 00:11:57.951 Active Power: Not Reported 00:11:57.951 Non-Operational Permissive Mode: Not Supported 00:11:57.951 00:11:57.951 Health Information 00:11:57.951 ================== 00:11:57.951 Critical Warnings: 00:11:57.951 Available Spare Space: OK 00:11:57.951 Temperature: OK 00:11:57.951 Device Reliability: OK 00:11:57.951 Read Only: No 00:11:57.951 Volatile Memory Backup: OK 00:11:57.951 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.951 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.951 Available Spare: 0% 00:11:57.951 Available Spare Threshold: 0% 00:11:57.951 Life Percentage Used: 0% 00:11:57.951 Data Units Read: 1206 00:11:57.951 Data Units Written: 558 00:11:57.951 Host Read Commands: 57943 00:11:57.951 Host Write Commands: 28458 00:11:57.951 Controller Busy Time: 0 minutes 00:11:57.951 Power Cycles: 0 00:11:57.951 Power On Hours: 0 hours 00:11:57.951 Unsafe Shutdowns: 0 00:11:57.951 Unrecoverable Media Errors: 0 00:11:57.951 Lifetime Error Log Entries: 0 00:11:57.951 Warning Temperature Time: 0 minutes 00:11:57.951 Critical Temperature Time: 0 minutes 00:11:57.951 00:11:57.951 Number of Queues 00:11:57.951 ================ 00:11:57.951 Number of I/O Submission Queues: 64 00:11:57.951 Number of I/O Completion Queues: 64 00:11:57.951 00:11:57.951 ZNS Specific Controller Data 00:11:57.951 ============================ 00:11:57.951 Zone Append Size Limit: 0 00:11:57.951 00:11:57.951 
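A quick sanity check on the SMART temperatures in the health blocks: the tool prints Kelvin with a parenthesized Celsius value, consistent with a whole-degree 273 K offset:

    # Verify the Kelvin -> Celsius conversion shown in the health output.
    echo $(( 323 - 273 ))   # 50, matches "323 Kelvin (50 Celsius)"
    echo $(( 343 - 273 ))   # 70, matches "343 Kelvin (70 Celsius)"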
00:11:57.951 Active Namespaces 00:11:57.951 ================= 00:11:57.951 Namespace ID:1 00:11:57.951 Error Recovery Timeout: Unlimited 00:11:57.951 Command Set Identifier: NVM (00h) 00:11:57.951 Deallocate: Supported 00:11:57.951 Deallocated/Unwritten Error: Supported 00:11:57.951 Deallocated Read Value: All 0x00 00:11:57.951 Deallocate in Write Zeroes: Not Supported 00:11:57.951 Deallocated Guard Field: 0xFFFF 00:11:57.951 Flush: Supported 00:11:57.951 Reservation: Not Supported 00:11:57.951 Namespace Sharing Capabilities: Private 00:11:57.951 Size (in LBAs): 1310720 (5GiB) 00:11:57.951 Capacity (in LBAs): 1310720 (5GiB) 00:11:57.951 Utilization (in LBAs): 1310720 (5GiB) 00:11:57.951 Thin Provisioning: Not Supported 00:11:57.951 Per-NS Atomic Units: No 00:11:57.951 Maximum Single Source Range Length: 128 00:11:57.951 Maximum Copy Length: 128 00:11:57.951 Maximum Source Range Count: 128 00:11:57.951 NGUID/EUI64 Never Reused: No 00:11:57.951 Namespace Write Protected: No 00:11:57.951 Number of LBA Formats: 8 00:11:57.951 Current LBA Format: LBA Format #04 00:11:57.951 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.951 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.951 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.951 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.951 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.951 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.951 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.951 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.951 00:11:57.951 12:33:06 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:57.951 12:33:06 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:11:58.210 ===================================================== 00:11:58.210 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:58.210 ===================================================== 00:11:58.210 Controller Capabilities/Features 00:11:58.210 ================================ 00:11:58.210 Vendor ID: 1b36 00:11:58.210 Subsystem Vendor ID: 1af4 00:11:58.210 Serial Number: 12342 00:11:58.210 Model Number: QEMU NVMe Ctrl 00:11:58.210 Firmware Version: 8.0.0 00:11:58.210 Recommended Arb Burst: 6 00:11:58.210 IEEE OUI Identifier: 00 54 52 00:11:58.210 Multi-path I/O 00:11:58.210 May have multiple subsystem ports: No 00:11:58.210 May have multiple controllers: No 00:11:58.210 Associated with SR-IOV VF: No 00:11:58.210 Max Data Transfer Size: 524288 00:11:58.210 Max Number of Namespaces: 256 00:11:58.210 Max Number of I/O Queues: 64 00:11:58.210 NVMe Specification Version (VS): 1.4 00:11:58.210 NVMe Specification Version (Identify): 1.4 00:11:58.210 Maximum Queue Entries: 2048 00:11:58.210 Contiguous Queues Required: Yes 00:11:58.210 Arbitration Mechanisms Supported 00:11:58.210 Weighted Round Robin: Not Supported 00:11:58.210 Vendor Specific: Not Supported 00:11:58.210 Reset Timeout: 7500 ms 00:11:58.210 Doorbell Stride: 4 bytes 00:11:58.210 NVM Subsystem Reset: Not Supported 00:11:58.210 Command Sets Supported 00:11:58.210 NVM Command Set: Supported 00:11:58.210 Boot Partition: Not Supported 00:11:58.210 Memory Page Size Minimum: 4096 bytes 00:11:58.210 Memory Page Size Maximum: 65536 bytes 00:11:58.210 Persistent Memory Region: Not Supported 00:11:58.210 Optional Asynchronous Events Supported 00:11:58.210 Namespace Attribute Notices: Supported 00:11:58.210 Firmware Activation Notices: Not Supported 00:11:58.210 ANA Change 
Notices: Not Supported 00:11:58.210 PLE Aggregate Log Change Notices: Not Supported 00:11:58.210 LBA Status Info Alert Notices: Not Supported 00:11:58.210 EGE Aggregate Log Change Notices: Not Supported 00:11:58.210 Normal NVM Subsystem Shutdown event: Not Supported 00:11:58.210 Zone Descriptor Change Notices: Not Supported 00:11:58.210 Discovery Log Change Notices: Not Supported 00:11:58.210 Controller Attributes 00:11:58.210 128-bit Host Identifier: Not Supported 00:11:58.210 Non-Operational Permissive Mode: Not Supported 00:11:58.210 NVM Sets: Not Supported 00:11:58.210 Read Recovery Levels: Not Supported 00:11:58.210 Endurance Groups: Not Supported 00:11:58.210 Predictable Latency Mode: Not Supported 00:11:58.210 Traffic Based Keep Alive: Not Supported 00:11:58.210 Namespace Granularity: Not Supported 00:11:58.210 SQ Associations: Not Supported 00:11:58.210 UUID List: Not Supported 00:11:58.210 Multi-Domain Subsystem: Not Supported 00:11:58.210 Fixed Capacity Management: Not Supported 00:11:58.210 Variable Capacity Management: Not Supported 00:11:58.210 Delete Endurance Group: Not Supported 00:11:58.210 Delete NVM Set: Not Supported 00:11:58.210 Extended LBA Formats Supported: Supported 00:11:58.210 Flexible Data Placement Supported: Not Supported 00:11:58.210 00:11:58.210 Controller Memory Buffer Support 00:11:58.210 ================================ 00:11:58.210 Supported: No 00:11:58.210 00:11:58.210 Persistent Memory Region Support 00:11:58.210 ================================ 00:11:58.210 Supported: No 00:11:58.210 00:11:58.210 Admin Command Set Attributes 00:11:58.210 ============================ 00:11:58.210 Security Send/Receive: Not Supported 00:11:58.210 Format NVM: Supported 00:11:58.210 Firmware Activate/Download: Not Supported 00:11:58.210 Namespace Management: Supported 00:11:58.210 Device Self-Test: Not Supported 00:11:58.210 Directives: Supported 00:11:58.210 NVMe-MI: Not Supported 00:11:58.210 Virtualization Management: Not Supported 00:11:58.210 Doorbell Buffer Config: Supported 00:11:58.210 Get LBA Status Capability: Not Supported 00:11:58.210 Command & Feature Lockdown Capability: Not Supported 00:11:58.210 Abort Command Limit: 4 00:11:58.210 Async Event Request Limit: 4 00:11:58.210 Number of Firmware Slots: N/A 00:11:58.210 Firmware Slot 1 Read-Only: N/A 00:11:58.210 Firmware Activation Without Reset: N/A 00:11:58.210 Multiple Update Detection Support: N/A 00:11:58.210 Firmware Update Granularity: No Information Provided 00:11:58.210 Per-Namespace SMART Log: Yes 00:11:58.210 Asymmetric Namespace Access Log Page: Not Supported 00:11:58.210 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:58.210 Command Effects Log Page: Supported 00:11:58.210 Get Log Page Extended Data: Supported 00:11:58.210 Telemetry Log Pages: Not Supported 00:11:58.210 Persistent Event Log Pages: Not Supported 00:11:58.210 Supported Log Pages Log Page: May Support 00:11:58.210 Commands Supported & Effects Log Page: Not Supported 00:11:58.210 Feature Identifiers & Effects Log Page: May Support 00:11:58.210 NVMe-MI Commands & Effects Log Page: May Support 00:11:58.210 Data Area 4 for Telemetry Log: Not Supported 00:11:58.210 Error Log Page Entries Supported: 1 00:11:58.210 Keep Alive: Not Supported 00:11:58.210 00:11:58.210 NVM Command Set Attributes 00:11:58.210 ========================== 00:11:58.210 Submission Queue Entry Size 00:11:58.210 Max: 64 00:11:58.210 Min: 64 00:11:58.210 Completion Queue Entry Size 00:11:58.210 Max: 16 00:11:58.210 Min: 16 00:11:58.210 Number of Namespaces: 256
00:11:58.210 Compare Command: Supported 00:11:58.210 Write Uncorrectable Command: Not Supported 00:11:58.210 Dataset Management Command: Supported 00:11:58.210 Write Zeroes Command: Supported 00:11:58.210 Set Features Save Field: Supported 00:11:58.210 Reservations: Not Supported 00:11:58.210 Timestamp: Supported 00:11:58.210 Copy: Supported 00:11:58.210 Volatile Write Cache: Present 00:11:58.210 Atomic Write Unit (Normal): 1 00:11:58.210 Atomic Write Unit (PFail): 1 00:11:58.210 Atomic Compare & Write Unit: 1 00:11:58.210 Fused Compare & Write: Not Supported 00:11:58.210 Scatter-Gather List 00:11:58.210 SGL Command Set: Supported 00:11:58.210 SGL Keyed: Not Supported 00:11:58.210 SGL Bit Bucket Descriptor: Not Supported 00:11:58.211 SGL Metadata Pointer: Not Supported 00:11:58.211 Oversized SGL: Not Supported 00:11:58.211 SGL Metadata Address: Not Supported 00:11:58.211 SGL Offset: Not Supported 00:11:58.211 Transport SGL Data Block: Not Supported 00:11:58.211 Replay Protected Memory Block: Not Supported 00:11:58.211 00:11:58.211 Firmware Slot Information 00:11:58.211 ========================= 00:11:58.211 Active slot: 1 00:11:58.211 Slot 1 Firmware Revision: 1.0 00:11:58.211 00:11:58.211 00:11:58.211 Commands Supported and Effects 00:11:58.211 ============================== 00:11:58.211 Admin Commands 00:11:58.211 -------------- 00:11:58.211 Delete I/O Submission Queue (00h): Supported 00:11:58.211 Create I/O Submission Queue (01h): Supported 00:11:58.211 Get Log Page (02h): Supported 00:11:58.211 Delete I/O Completion Queue (04h): Supported 00:11:58.211 Create I/O Completion Queue (05h): Supported 00:11:58.211 Identify (06h): Supported 00:11:58.211 Abort (08h): Supported 00:11:58.211 Set Features (09h): Supported 00:11:58.211 Get Features (0Ah): Supported 00:11:58.211 Asynchronous Event Request (0Ch): Supported 00:11:58.211 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:58.211 Directive Send (19h): Supported 00:11:58.211 Directive Receive (1Ah): Supported 00:11:58.211 Virtualization Management (1Ch): Supported 00:11:58.211 Doorbell Buffer Config (7Ch): Supported 00:11:58.211 Format NVM (80h): Supported LBA-Change 00:11:58.211 I/O Commands 00:11:58.211 ------------ 00:11:58.211 Flush (00h): Supported LBA-Change 00:11:58.211 Write (01h): Supported LBA-Change 00:11:58.211 Read (02h): Supported 00:11:58.211 Compare (05h): Supported 00:11:58.211 Write Zeroes (08h): Supported LBA-Change 00:11:58.211 Dataset Management (09h): Supported LBA-Change 00:11:58.211 Unknown (0Ch): Supported 00:11:58.211 Unknown (12h): Supported 00:11:58.211 Copy (19h): Supported LBA-Change 00:11:58.211 Unknown (1Dh): Supported LBA-Change 00:11:58.211 00:11:58.211 Error Log 00:11:58.211 ========= 00:11:58.211 00:11:58.211 Arbitration 00:11:58.211 =========== 00:11:58.211 Arbitration Burst: no limit 00:11:58.211 00:11:58.211 Power Management 00:11:58.211 ================ 00:11:58.211 Number of Power States: 1 00:11:58.211 Current Power State: Power State #0 00:11:58.211 Power State #0: 00:11:58.211 Max Power: 25.00 W 00:11:58.211 Non-Operational State: Operational 00:11:58.211 Entry Latency: 16 microseconds 00:11:58.211 Exit Latency: 4 microseconds 00:11:58.211 Relative Read Throughput: 0 00:11:58.211 Relative Read Latency: 0 00:11:58.211 Relative Write Throughput: 0 00:11:58.211 Relative Write Latency: 0 00:11:58.211 Idle Power: Not Reported 00:11:58.211 Active Power: Not Reported 00:11:58.211 Non-Operational Permissive Mode: Not Supported 00:11:58.211 00:11:58.211 Health Information 00:11:58.211 
================== 00:11:58.211 Critical Warnings: 00:11:58.211 Available Spare Space: OK 00:11:58.211 Temperature: OK 00:11:58.211 Device Reliability: OK 00:11:58.211 Read Only: No 00:11:58.211 Volatile Memory Backup: OK 00:11:58.211 Current Temperature: 323 Kelvin (50 Celsius) 00:11:58.211 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:58.211 Available Spare: 0% 00:11:58.211 Available Spare Threshold: 0% 00:11:58.211 Life Percentage Used: 0% 00:11:58.211 Data Units Read: 3630 00:11:58.211 Data Units Written: 1673 00:11:58.211 Host Read Commands: 174237 00:11:58.211 Host Write Commands: 85487 00:11:58.211 Controller Busy Time: 0 minutes 00:11:58.211 Power Cycles: 0 00:11:58.211 Power On Hours: 0 hours 00:11:58.211 Unsafe Shutdowns: 0 00:11:58.211 Unrecoverable Media Errors: 0 00:11:58.211 Lifetime Error Log Entries: 0 00:11:58.211 Warning Temperature Time: 0 minutes 00:11:58.211 Critical Temperature Time: 0 minutes 00:11:58.211 00:11:58.211 Number of Queues 00:11:58.211 ================ 00:11:58.211 Number of I/O Submission Queues: 64 00:11:58.211 Number of I/O Completion Queues: 64 00:11:58.211 00:11:58.211 ZNS Specific Controller Data 00:11:58.211 ============================ 00:11:58.211 Zone Append Size Limit: 0 00:11:58.211 00:11:58.211 00:11:58.211 Active Namespaces 00:11:58.211 ================= 00:11:58.211 Namespace ID:1 00:11:58.211 Error Recovery Timeout: Unlimited 00:11:58.211 Command Set Identifier: NVM (00h) 00:11:58.211 Deallocate: Supported 00:11:58.211 Deallocated/Unwritten Error: Supported 00:11:58.211 Deallocated Read Value: All 0x00 00:11:58.211 Deallocate in Write Zeroes: Not Supported 00:11:58.211 Deallocated Guard Field: 0xFFFF 00:11:58.211 Flush: Supported 00:11:58.211 Reservation: Not Supported 00:11:58.211 Namespace Sharing Capabilities: Private 00:11:58.211 Size (in LBAs): 1048576 (4GiB) 00:11:58.211 Capacity (in LBAs): 1048576 (4GiB) 00:11:58.211 Utilization (in LBAs): 1048576 (4GiB) 00:11:58.211 Thin Provisioning: Not Supported 00:11:58.211 Per-NS Atomic Units: No 00:11:58.211 Maximum Single Source Range Length: 128 00:11:58.211 Maximum Copy Length: 128 00:11:58.211 Maximum Source Range Count: 128 00:11:58.211 NGUID/EUI64 Never Reused: No 00:11:58.211 Namespace Write Protected: No 00:11:58.211 Number of LBA Formats: 8 00:11:58.211 Current LBA Format: LBA Format #04 00:11:58.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:58.211 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:58.211 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:58.211 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:58.211 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:58.211 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:58.211 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:58.211 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:58.211 00:11:58.211 Namespace ID:2 00:11:58.211 Error Recovery Timeout: Unlimited 00:11:58.211 Command Set Identifier: NVM (00h) 00:11:58.211 Deallocate: Supported 00:11:58.211 Deallocated/Unwritten Error: Supported 00:11:58.211 Deallocated Read Value: All 0x00 00:11:58.211 Deallocate in Write Zeroes: Not Supported 00:11:58.211 Deallocated Guard Field: 0xFFFF 00:11:58.211 Flush: Supported 00:11:58.211 Reservation: Not Supported 00:11:58.211 Namespace Sharing Capabilities: Private 00:11:58.211 Size (in LBAs): 1048576 (4GiB) 00:11:58.211 Capacity (in LBAs): 1048576 (4GiB) 00:11:58.211 Utilization (in LBAs): 1048576 (4GiB) 00:11:58.211 Thin Provisioning: Not Supported 00:11:58.211 Per-NS Atomic Units: No 
00:11:58.211 Maximum Single Source Range Length: 128 00:11:58.211 Maximum Copy Length: 128 00:11:58.211 Maximum Source Range Count: 128 00:11:58.211 NGUID/EUI64 Never Reused: No 00:11:58.211 Namespace Write Protected: No 00:11:58.211 Number of LBA Formats: 8 00:11:58.211 Current LBA Format: LBA Format #04 00:11:58.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:58.211 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:58.211 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:58.211 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:58.211 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:58.211 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:58.211 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:58.211 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:58.211 00:11:58.211 Namespace ID:3 00:11:58.211 Error Recovery Timeout: Unlimited 00:11:58.211 Command Set Identifier: NVM (00h) 00:11:58.211 Deallocate: Supported 00:11:58.211 Deallocated/Unwritten Error: Supported 00:11:58.211 Deallocated Read Value: All 0x00 00:11:58.211 Deallocate in Write Zeroes: Not Supported 00:11:58.211 Deallocated Guard Field: 0xFFFF 00:11:58.211 Flush: Supported 00:11:58.211 Reservation: Not Supported 00:11:58.211 Namespace Sharing Capabilities: Private 00:11:58.211 Size (in LBAs): 1048576 (4GiB) 00:11:58.211 Capacity (in LBAs): 1048576 (4GiB) 00:11:58.211 Utilization (in LBAs): 1048576 (4GiB) 00:11:58.211 Thin Provisioning: Not Supported 00:11:58.211 Per-NS Atomic Units: No 00:11:58.211 Maximum Single Source Range Length: 128 00:11:58.211 Maximum Copy Length: 128 00:11:58.211 Maximum Source Range Count: 128 00:11:58.211 NGUID/EUI64 Never Reused: No 00:11:58.211 Namespace Write Protected: No 00:11:58.211 Number of LBA Formats: 8 00:11:58.211 Current LBA Format: LBA Format #04 00:11:58.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:58.211 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:58.211 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:58.211 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:58.211 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:58.211 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:58.211 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:58.211 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:58.211 00:11:58.211 12:33:07 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:58.211 12:33:07 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:11:58.470 ===================================================== 00:11:58.470 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:58.470 ===================================================== 00:11:58.470 Controller Capabilities/Features 00:11:58.470 ================================ 00:11:58.470 Vendor ID: 1b36 00:11:58.470 Subsystem Vendor ID: 1af4 00:11:58.470 Serial Number: 12343 00:11:58.470 Model Number: QEMU NVMe Ctrl 00:11:58.470 Firmware Version: 8.0.0 00:11:58.470 Recommended Arb Burst: 6 00:11:58.470 IEEE OUI Identifier: 00 54 52 00:11:58.470 Multi-path I/O 00:11:58.470 May have multiple subsystem ports: No 00:11:58.470 May have multiple controllers: Yes 00:11:58.470 Associated with SR-IOV VF: No 00:11:58.470 Max Data Transfer Size: 524288 00:11:58.470 Max Number of Namespaces: 256 00:11:58.470 Max Number of I/O Queues: 64 00:11:58.470 NVMe Specification Version (VS): 1.4 00:11:58.470 NVMe Specification Version (Identify): 1.4 00:11:58.470 Maximum Queue Entries: 2048 
00:11:58.470 Contiguous Queues Required: Yes 00:11:58.471 Arbitration Mechanisms Supported 00:11:58.471 Weighted Round Robin: Not Supported 00:11:58.471 Vendor Specific: Not Supported 00:11:58.471 Reset Timeout: 7500 ms 00:11:58.471 Doorbell Stride: 4 bytes 00:11:58.471 NVM Subsystem Reset: Not Supported 00:11:58.471 Command Sets Supported 00:11:58.471 NVM Command Set: Supported 00:11:58.471 Boot Partition: Not Supported 00:11:58.471 Memory Page Size Minimum: 4096 bytes 00:11:58.471 Memory Page Size Maximum: 65536 bytes 00:11:58.471 Persistent Memory Region: Not Supported 00:11:58.471 Optional Asynchronous Events Supported 00:11:58.471 Namespace Attribute Notices: Supported 00:11:58.471 Firmware Activation Notices: Not Supported 00:11:58.471 ANA Change Notices: Not Supported 00:11:58.471 PLE Aggregate Log Change Notices: Not Supported 00:11:58.471 LBA Status Info Alert Notices: Not Supported 00:11:58.471 EGE Aggregate Log Change Notices: Not Supported 00:11:58.471 Normal NVM Subsystem Shutdown event: Not Supported 00:11:58.471 Zone Descriptor Change Notices: Not Supported 00:11:58.471 Discovery Log Change Notices: Not Supported 00:11:58.471 Controller Attributes 00:11:58.471 128-bit Host Identifier: Not Supported 00:11:58.471 Non-Operational Permissive Mode: Not Supported 00:11:58.471 NVM Sets: Not Supported 00:11:58.471 Read Recovery Levels: Not Supported 00:11:58.471 Endurance Groups: Supported 00:11:58.471 Predictable Latency Mode: Not Supported 00:11:58.471 Traffic Based Keep Alive: Not Supported 00:11:58.471 Namespace Granularity: Not Supported 00:11:58.471 SQ Associations: Not Supported 00:11:58.471 UUID List: Not Supported 00:11:58.471 Multi-Domain Subsystem: Not Supported 00:11:58.471 Fixed Capacity Management: Not Supported 00:11:58.471 Variable Capacity Management: Not Supported 00:11:58.471 Delete Endurance Group: Not Supported 00:11:58.471 Delete NVM Set: Not Supported 00:11:58.471 Extended LBA Formats Supported: Supported 00:11:58.471 Flexible Data Placement Supported: Supported 00:11:58.471 00:11:58.471 Controller Memory Buffer Support 00:11:58.471 ================================ 00:11:58.471 Supported: No 00:11:58.471 00:11:58.471 Persistent Memory Region Support 00:11:58.471 ================================ 00:11:58.471 Supported: No 00:11:58.471 00:11:58.471 Admin Command Set Attributes 00:11:58.471 ============================ 00:11:58.471 Security Send/Receive: Not Supported 00:11:58.471 Format NVM: Supported 00:11:58.471 Firmware Activate/Download: Not Supported 00:11:58.471 Namespace Management: Supported 00:11:58.471 Device Self-Test: Not Supported 00:11:58.471 Directives: Supported 00:11:58.471 NVMe-MI: Not Supported 00:11:58.471 Virtualization Management: Not Supported 00:11:58.471 Doorbell Buffer Config: Supported 00:11:58.471 Get LBA Status Capability: Not Supported 00:11:58.471 Command & Feature Lockdown Capability: Not Supported 00:11:58.471 Abort Command Limit: 4 00:11:58.471 Async Event Request Limit: 4 00:11:58.471 Number of Firmware Slots: N/A 00:11:58.471 Firmware Slot 1 Read-Only: N/A 00:11:58.471 Firmware Activation Without Reset: N/A 00:11:58.471 Multiple Update Detection Support: N/A 00:11:58.471 Firmware Update Granularity: No Information Provided 00:11:58.471 Per-Namespace SMART Log: Yes 00:11:58.471 Asymmetric Namespace Access Log Page: Not Supported 00:11:58.471 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:58.471 Command Effects Log Page: Supported 00:11:58.471 Get Log Page Extended Data: Supported 00:11:58.471 Telemetry Log Pages: Not
Supported 00:11:58.471 Persistent Event Log Pages: Not Supported 00:11:58.471 Supported Log Pages Log Page: May Support 00:11:58.471 Commands Supported & Effects Log Page: Not Supported 00:11:58.471 Feature Identifiers & Effects Log Page: May Support 00:11:58.471 NVMe-MI Commands & Effects Log Page: May Support 00:11:58.471 Data Area 4 for Telemetry Log: Not Supported 00:11:58.471 Error Log Page Entries Supported: 1 00:11:58.471 Keep Alive: Not Supported 00:11:58.471 00:11:58.471 NVM Command Set Attributes 00:11:58.471 ========================== 00:11:58.471 Submission Queue Entry Size 00:11:58.471 Max: 64 00:11:58.471 Min: 64 00:11:58.471 Completion Queue Entry Size 00:11:58.471 Max: 16 00:11:58.471 Min: 16 00:11:58.471 Number of Namespaces: 256 00:11:58.471 Compare Command: Supported 00:11:58.471 Write Uncorrectable Command: Not Supported 00:11:58.471 Dataset Management Command: Supported 00:11:58.471 Write Zeroes Command: Supported 00:11:58.471 Set Features Save Field: Supported 00:11:58.471 Reservations: Not Supported 00:11:58.471 Timestamp: Supported 00:11:58.471 Copy: Supported 00:11:58.471 Volatile Write Cache: Present 00:11:58.471 Atomic Write Unit (Normal): 1 00:11:58.471 Atomic Write Unit (PFail): 1 00:11:58.471 Atomic Compare & Write Unit: 1 00:11:58.471 Fused Compare & Write: Not Supported 00:11:58.471 Scatter-Gather List 00:11:58.471 SGL Command Set: Supported 00:11:58.471 SGL Keyed: Not Supported 00:11:58.471 SGL Bit Bucket Descriptor: Not Supported 00:11:58.471 SGL Metadata Pointer: Not Supported 00:11:58.471 Oversized SGL: Not Supported 00:11:58.471 SGL Metadata Address: Not Supported 00:11:58.471 SGL Offset: Not Supported 00:11:58.471 Transport SGL Data Block: Not Supported 00:11:58.471 Replay Protected Memory Block: Not Supported 00:11:58.471 00:11:58.471 Firmware Slot Information 00:11:58.471 ========================= 00:11:58.471 Active slot: 1 00:11:58.471 Slot 1 Firmware Revision: 1.0 00:11:58.471 00:11:58.471 00:11:58.471 Commands Supported and Effects 00:11:58.471 ============================== 00:11:58.471 Admin Commands 00:11:58.471 -------------- 00:11:58.471 Delete I/O Submission Queue (00h): Supported 00:11:58.471 Create I/O Submission Queue (01h): Supported 00:11:58.471 Get Log Page (02h): Supported 00:11:58.471 Delete I/O Completion Queue (04h): Supported 00:11:58.471 Create I/O Completion Queue (05h): Supported 00:11:58.471 Identify (06h): Supported 00:11:58.471 Abort (08h): Supported 00:11:58.471 Set Features (09h): Supported 00:11:58.471 Get Features (0Ah): Supported 00:11:58.471 Asynchronous Event Request (0Ch): Supported 00:11:58.471 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:58.471 Directive Send (19h): Supported 00:11:58.471 Directive Receive (1Ah): Supported 00:11:58.471 Virtualization Management (1Ch): Supported 00:11:58.471 Doorbell Buffer Config (7Ch): Supported 00:11:58.471 Format NVM (80h): Supported LBA-Change 00:11:58.471 I/O Commands 00:11:58.471 ------------ 00:11:58.471 Flush (00h): Supported LBA-Change 00:11:58.471 Write (01h): Supported LBA-Change 00:11:58.471 Read (02h): Supported 00:11:58.471 Compare (05h): Supported 00:11:58.471 Write Zeroes (08h): Supported LBA-Change 00:11:58.471 Dataset Management (09h): Supported LBA-Change 00:11:58.471 Unknown (0Ch): Supported 00:11:58.471 Unknown (12h): Supported 00:11:58.471 Copy (19h): Supported LBA-Change 00:11:58.471 Unknown (1Dh): Supported LBA-Change 00:11:58.471 00:11:58.471 Error Log 00:11:58.471 ========= 00:11:58.471 00:11:58.471 Arbitration 00:11:58.471 ===========
00:11:58.471 Arbitration Burst: no limit 00:11:58.471 00:11:58.471 Power Management 00:11:58.471 ================ 00:11:58.471 Number of Power States: 1 00:11:58.471 Current Power State: Power State #0 00:11:58.471 Power State #0: 00:11:58.471 Max Power: 25.00 W 00:11:58.471 Non-Operational State: Operational 00:11:58.471 Entry Latency: 16 microseconds 00:11:58.471 Exit Latency: 4 microseconds 00:11:58.471 Relative Read Throughput: 0 00:11:58.471 Relative Read Latency: 0 00:11:58.471 Relative Write Throughput: 0 00:11:58.471 Relative Write Latency: 0 00:11:58.471 Idle Power: Not Reported 00:11:58.471 Active Power: Not Reported 00:11:58.471 Non-Operational Permissive Mode: Not Supported 00:11:58.471 00:11:58.471 Health Information 00:11:58.471 ================== 00:11:58.471 Critical Warnings: 00:11:58.471 Available Spare Space: OK 00:11:58.471 Temperature: OK 00:11:58.471 Device Reliability: OK 00:11:58.471 Read Only: No 00:11:58.471 Volatile Memory Backup: OK 00:11:58.471 Current Temperature: 323 Kelvin (50 Celsius) 00:11:58.471 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:58.471 Available Spare: 0% 00:11:58.471 Available Spare Threshold: 0% 00:11:58.471 Life Percentage Used: 0% 00:11:58.471 Data Units Read: 1188 00:11:58.471 Data Units Written: 566 00:11:58.471 Host Read Commands: 57380 00:11:58.471 Host Write Commands: 28582 00:11:58.471 Controller Busy Time: 0 minutes 00:11:58.471 Power Cycles: 0 00:11:58.471 Power On Hours: 0 hours 00:11:58.471 Unsafe Shutdowns: 0 00:11:58.471 Unrecoverable Media Errors: 0 00:11:58.471 Lifetime Error Log Entries: 0 00:11:58.471 Warning Temperature Time: 0 minutes 00:11:58.471 Critical Temperature Time: 0 minutes 00:11:58.472 00:11:58.472 Number of Queues 00:11:58.472 ================ 00:11:58.472 Number of I/O Submission Queues: 64 00:11:58.472 Number of I/O Completion Queues: 64 00:11:58.472 00:11:58.472 ZNS Specific Controller Data 00:11:58.472 ============================ 00:11:58.472 Zone Append Size Limit: 0 00:11:58.472 00:11:58.472 00:11:58.472 Active Namespaces 00:11:58.472 ================= 00:11:58.472 Namespace ID:1 00:11:58.472 Error Recovery Timeout: Unlimited 00:11:58.472 Command Set Identifier: NVM (00h) 00:11:58.472 Deallocate: Supported 00:11:58.472 Deallocated/Unwritten Error: Supported 00:11:58.472 Deallocated Read Value: All 0x00 00:11:58.472 Deallocate in Write Zeroes: Not Supported 00:11:58.472 Deallocated Guard Field: 0xFFFF 00:11:58.472 Flush: Supported 00:11:58.472 Reservation: Not Supported 00:11:58.472 Namespace Sharing Capabilities: Multiple Controllers 00:11:58.472 Size (in LBAs): 262144 (1GiB) 00:11:58.472 Capacity (in LBAs): 262144 (1GiB) 00:11:58.472 Utilization (in LBAs): 262144 (1GiB) 00:11:58.472 Thin Provisioning: Not Supported 00:11:58.472 Per-NS Atomic Units: No 00:11:58.472 Maximum Single Source Range Length: 128 00:11:58.472 Maximum Copy Length: 128 00:11:58.472 Maximum Source Range Count: 128 00:11:58.472 NGUID/EUI64 Never Reused: No 00:11:58.472 Namespace Write Protected: No 00:11:58.472 Endurance group ID: 1 00:11:58.472 Number of LBA Formats: 8 00:11:58.472 Current LBA Format: LBA Format #04 00:11:58.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:58.472 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:58.472 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:58.472 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:58.472 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:58.472 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:58.472 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:11:58.472 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:58.472 00:11:58.472 Get Feature FDP: 00:11:58.472 ================ 00:11:58.472 Enabled: Yes 00:11:58.472 FDP configuration index: 0 00:11:58.472 00:11:58.472 FDP configurations log page 00:11:58.472 =========================== 00:11:58.472 Number of FDP configurations: 1 00:11:58.472 Version: 0 00:11:58.472 Size: 112 00:11:58.472 FDP Configuration Descriptor: 0 00:11:58.472 Descriptor Size: 96 00:11:58.472 Reclaim Group Identifier format: 2 00:11:58.472 FDP Volatile Write Cache: Not Present 00:11:58.472 FDP Configuration: Valid 00:11:58.472 Vendor Specific Size: 0 00:11:58.472 Number of Reclaim Groups: 2 00:11:58.472 Number of Reclaim Unit Handles: 8 00:11:58.472 Max Placement Identifiers: 128 00:11:58.472 Number of Namespaces Supported: 256 00:11:58.472 Reclaim Unit Nominal Size: 6000000 bytes 00:11:58.472 Estimated Reclaim Unit Time Limit: Not Reported 00:11:58.472 RUH Desc #000: RUH Type: Initially Isolated 00:11:58.472 RUH Desc #001: RUH Type: Initially Isolated 00:11:58.472 RUH Desc #002: RUH Type: Initially Isolated 00:11:58.472 RUH Desc #003: RUH Type: Initially Isolated 00:11:58.472 RUH Desc #004: RUH Type: Initially Isolated 00:11:58.472 RUH Desc #005: RUH Type: Initially Isolated 00:11:58.472 RUH Desc #006: RUH Type: Initially Isolated 00:11:58.472 RUH Desc #007: RUH Type: Initially Isolated 00:11:58.472 00:11:58.472 FDP reclaim unit handle usage log page 00:11:58.472 ====================================== 00:11:58.472 Number of Reclaim Unit Handles: 8 00:11:58.472 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:58.472 RUH Usage Desc #001: RUH Attributes: Unused 00:11:58.472 RUH Usage Desc #002: RUH Attributes: Unused 00:11:58.472 RUH Usage Desc #003: RUH Attributes: Unused 00:11:58.472 RUH Usage Desc #004: RUH Attributes: Unused 00:11:58.472 RUH Usage Desc #005: RUH Attributes: Unused 00:11:58.472 RUH Usage Desc #006: RUH Attributes: Unused 00:11:58.472 RUH Usage Desc #007: RUH Attributes: Unused 00:11:58.472 00:11:58.472 FDP statistics log page 00:11:58.472 ======================= 00:11:58.472 Host bytes with metadata written: 376270848 00:11:58.472 Media bytes with metadata written: 376352768 00:11:58.472 Media bytes erased: 0 00:11:58.472 00:11:58.472 FDP events log page 00:11:58.472 =================== 00:11:58.472 Number of FDP events: 0 00:11:58.472 00:11:58.472 ************************************ 00:11:58.472 END TEST nvme_identify 00:11:58.472 ************************************ 00:11:58.472 00:11:58.472 real 0m1.611s 00:11:58.472 user 0m0.642s 00:11:58.472 sys 0m0.777s 00:11:58.472 12:33:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.472 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:11:58.730 12:33:07 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:58.730 12:33:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:58.730 12:33:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.730 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:11:58.731 ************************************ 00:11:58.731 START TEST nvme_perf 00:11:58.731 ************************************ 00:11:58.731 12:33:07 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:58.731 12:33:07 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:00.107 Initializing NVMe Controllers 00:12:00.107 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:00.107
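One number worth pulling out of the FDP statistics log page above before the perf output begins: the gap between media bytes and host bytes written comes out to a whole number of 4 KiB units. This is a rough reading of these counters, not a formal write-amplification figure:

    # Cross-check the FDP statistics printed above.
    host=376270848; media=376352768
    echo $(( media - host ))            # 81920 bytes written to media beyond host writes
    echo $(( (media - host) / 4096 ))   # 20 units of 4 KiB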
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:00.107 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:00.107 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:00.107 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:00.107 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:00.107 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:00.107 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:00.107 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:00.107 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:00.107 Initialization complete. Launching workers. 00:12:00.107 ======================================================== 00:12:00.107 Latency(us) 00:12:00.107 Device Information : IOPS MiB/s Average min max 00:12:00.107 PCIE (0000:00:06.0) NSID 1 from core 0: 12342.93 144.64 10362.85 6978.69 44939.42 00:12:00.107 PCIE (0000:00:07.0) NSID 1 from core 0: 12342.93 144.64 10350.11 7219.89 42523.31 00:12:00.107 PCIE (0000:00:09.0) NSID 1 from core 0: 12342.93 144.64 10335.24 7305.39 41496.82 00:12:00.107 PCIE (0000:00:08.0) NSID 1 from core 0: 12342.93 144.64 10318.72 7364.35 39168.74 00:12:00.107 PCIE (0000:00:08.0) NSID 2 from core 0: 12342.93 144.64 10303.12 7333.71 36962.66 00:12:00.107 PCIE (0000:00:08.0) NSID 3 from core 0: 12470.18 146.13 10182.56 7269.86 24800.58 00:12:00.107 ======================================================== 00:12:00.107 Total : 74184.83 869.35 10308.55 6978.69 44939.42 00:12:00.107 00:12:00.107 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:00.107 ================================================================================= 00:12:00.107 1.00000% : 7477.062us 00:12:00.107 10.00000% : 8281.367us 00:12:00.107 25.00000% : 9055.884us 00:12:00.107 50.00000% : 9889.978us 00:12:00.107 75.00000% : 10783.651us 00:12:00.107 90.00000% : 11736.902us 00:12:00.107 95.00000% : 12988.044us 00:12:00.107 98.00000% : 18230.924us 00:12:00.107 99.00000% : 40036.538us 00:12:00.107 99.50000% : 42657.978us 00:12:00.107 99.90000% : 44564.480us 00:12:00.107 99.99000% : 45041.105us 00:12:00.107 99.99900% : 45041.105us 00:12:00.107 99.99990% : 45041.105us 00:12:00.107 99.99999% : 45041.105us 00:12:00.107 00:12:00.107 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:12:00.107 ================================================================================= 00:12:00.107 1.00000% : 7685.585us 00:12:00.107 10.00000% : 8400.524us 00:12:00.107 25.00000% : 9115.462us 00:12:00.107 50.00000% : 9830.400us 00:12:00.107 75.00000% : 10664.495us 00:12:00.107 90.00000% : 11677.324us 00:12:00.107 95.00000% : 12749.731us 00:12:00.107 98.00000% : 19065.018us 00:12:00.107 99.00000% : 37891.724us 00:12:00.107 99.50000% : 40274.851us 00:12:00.107 99.90000% : 42181.353us 00:12:00.107 99.99000% : 42657.978us 00:12:00.107 99.99900% : 42657.978us 00:12:00.107 99.99990% : 42657.978us 00:12:00.107 99.99999% : 42657.978us 00:12:00.107 00:12:00.107 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:12:00.107 ================================================================================= 00:12:00.107 1.00000% : 7685.585us 00:12:00.107 10.00000% : 8460.102us 00:12:00.107 25.00000% : 9115.462us 00:12:00.107 50.00000% : 9830.400us 00:12:00.107 75.00000% : 10664.495us 00:12:00.107 90.00000% : 11677.324us 00:12:00.107 95.00000% : 12630.575us 00:12:00.107 98.00000% : 18707.549us 00:12:00.107 99.00000% : 36938.473us 00:12:00.107 99.50000% : 39321.600us 00:12:00.107 99.90000% : 
41228.102us 00:12:00.107 99.99000% : 41466.415us 00:12:00.107 99.99900% : 41704.727us 00:12:00.107 99.99990% : 41704.727us 00:12:00.107 99.99999% : 41704.727us 00:12:00.107 00:12:00.107 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:12:00.107 ================================================================================= 00:12:00.107 1.00000% : 7745.164us 00:12:00.107 10.00000% : 8460.102us 00:12:00.107 25.00000% : 9115.462us 00:12:00.107 50.00000% : 9830.400us 00:12:00.107 75.00000% : 10664.495us 00:12:00.107 90.00000% : 11736.902us 00:12:00.107 95.00000% : 12690.153us 00:12:00.107 98.00000% : 18826.705us 00:12:00.107 99.00000% : 34555.345us 00:12:00.107 99.50000% : 36938.473us 00:12:00.107 99.90000% : 38844.975us 00:12:00.107 99.99000% : 39321.600us 00:12:00.107 99.99900% : 39321.600us 00:12:00.107 99.99990% : 39321.600us 00:12:00.107 99.99999% : 39321.600us 00:12:00.107 00:12:00.107 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:12:00.107 ================================================================================= 00:12:00.107 1.00000% : 7685.585us 00:12:00.107 10.00000% : 8460.102us 00:12:00.107 25.00000% : 9115.462us 00:12:00.107 50.00000% : 9889.978us 00:12:00.107 75.00000% : 10724.073us 00:12:00.107 90.00000% : 11915.636us 00:12:00.107 95.00000% : 13524.247us 00:12:00.107 98.00000% : 15728.640us 00:12:00.107 99.00000% : 32410.531us 00:12:00.107 99.50000% : 34793.658us 00:12:00.107 99.90000% : 36700.160us 00:12:00.107 99.99000% : 36938.473us 00:12:00.107 99.99900% : 37176.785us 00:12:00.107 99.99990% : 37176.785us 00:12:00.107 99.99999% : 37176.785us 00:12:00.107 00:12:00.107 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:12:00.107 ================================================================================= 00:12:00.107 1.00000% : 7685.585us 00:12:00.107 10.00000% : 8460.102us 00:12:00.107 25.00000% : 9115.462us 00:12:00.107 50.00000% : 9830.400us 00:12:00.107 75.00000% : 10724.073us 00:12:00.107 90.00000% : 11975.215us 00:12:00.107 95.00000% : 13405.091us 00:12:00.107 98.00000% : 16324.422us 00:12:00.107 99.00000% : 20137.425us 00:12:00.107 99.50000% : 22520.553us 00:12:00.107 99.90000% : 24427.055us 00:12:00.107 99.99000% : 24784.524us 00:12:00.107 99.99900% : 24903.680us 00:12:00.107 99.99990% : 24903.680us 00:12:00.107 99.99999% : 24903.680us 00:12:00.107 00:12:00.107 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:00.107 ============================================================================== 00:12:00.107 Range in us Cumulative IO count 00:12:00.107 6970.647 - 7000.436: 0.0161% ( 2) 00:12:00.107 7000.436 - 7030.225: 0.0483% ( 4) 00:12:00.107 7030.225 - 7060.015: 0.0725% ( 3) 00:12:00.107 7060.015 - 7089.804: 0.0886% ( 2) 00:12:00.107 7089.804 - 7119.593: 0.1047% ( 2) 00:12:00.107 7119.593 - 7149.382: 0.1208% ( 2) 00:12:00.107 7149.382 - 7179.171: 0.1691% ( 6) 00:12:00.108 7179.171 - 7208.960: 0.1933% ( 3) 00:12:00.108 7208.960 - 7238.749: 0.2336% ( 5) 00:12:00.108 7238.749 - 7268.538: 0.3061% ( 9) 00:12:00.108 7268.538 - 7298.327: 0.3705% ( 8) 00:12:00.108 7298.327 - 7328.116: 0.4510% ( 10) 00:12:00.108 7328.116 - 7357.905: 0.5638% ( 14) 00:12:00.108 7357.905 - 7387.695: 0.6765% ( 14) 00:12:00.108 7387.695 - 7417.484: 0.8376% ( 20) 00:12:00.108 7417.484 - 7447.273: 0.9987% ( 20) 00:12:00.108 7447.273 - 7477.062: 1.1759% ( 22) 00:12:00.108 7477.062 - 7506.851: 1.3692% ( 24) 00:12:00.108 7506.851 - 7536.640: 1.5867% ( 27) 00:12:00.108 7536.640 - 7566.429: 1.8605% ( 34) 00:12:00.108 
7566.429 - 7596.218: 2.0780% ( 27) 00:12:00.108 7596.218 - 7626.007: 2.3438% ( 33) 00:12:00.108 7626.007 - 7685.585: 2.9398% ( 74) 00:12:00.108 7685.585 - 7745.164: 3.5599% ( 77) 00:12:00.108 7745.164 - 7804.742: 4.2445% ( 85) 00:12:00.108 7804.742 - 7864.320: 4.8647% ( 77) 00:12:00.108 7864.320 - 7923.898: 5.6298% ( 95) 00:12:00.108 7923.898 - 7983.476: 6.3064% ( 84) 00:12:00.108 7983.476 - 8043.055: 7.1118% ( 100) 00:12:00.108 8043.055 - 8102.633: 7.9253% ( 101) 00:12:00.108 8102.633 - 8162.211: 8.7065% ( 97) 00:12:00.108 8162.211 - 8221.789: 9.5200% ( 101) 00:12:00.108 8221.789 - 8281.367: 10.4381% ( 114) 00:12:00.108 8281.367 - 8340.945: 11.3563% ( 114) 00:12:00.108 8340.945 - 8400.524: 12.3148% ( 119) 00:12:00.108 8400.524 - 8460.102: 13.3618% ( 130) 00:12:00.108 8460.102 - 8519.680: 14.3363% ( 121) 00:12:00.108 8519.680 - 8579.258: 15.4639% ( 140) 00:12:00.108 8579.258 - 8638.836: 16.6640% ( 149) 00:12:00.108 8638.836 - 8698.415: 17.9043% ( 154) 00:12:00.108 8698.415 - 8757.993: 19.1769% ( 158) 00:12:00.108 8757.993 - 8817.571: 20.5380% ( 169) 00:12:00.108 8817.571 - 8877.149: 21.8106% ( 158) 00:12:00.108 8877.149 - 8936.727: 23.2603% ( 180) 00:12:00.108 8936.727 - 8996.305: 24.6537% ( 173) 00:12:00.108 8996.305 - 9055.884: 26.2001% ( 192) 00:12:00.108 9055.884 - 9115.462: 27.8351% ( 203) 00:12:00.108 9115.462 - 9175.040: 29.5023% ( 207) 00:12:00.108 9175.040 - 9234.618: 31.1614% ( 206) 00:12:00.108 9234.618 - 9294.196: 32.9655% ( 224) 00:12:00.108 9294.196 - 9353.775: 34.7938% ( 227) 00:12:00.108 9353.775 - 9413.353: 36.5335% ( 216) 00:12:00.108 9413.353 - 9472.931: 38.4101% ( 233) 00:12:00.108 9472.931 - 9532.509: 40.2223% ( 225) 00:12:00.108 9532.509 - 9592.087: 42.1392% ( 238) 00:12:00.108 9592.087 - 9651.665: 44.0077% ( 232) 00:12:00.108 9651.665 - 9711.244: 45.8441% ( 228) 00:12:00.108 9711.244 - 9770.822: 47.7126% ( 232) 00:12:00.108 9770.822 - 9830.400: 49.6053% ( 235) 00:12:00.108 9830.400 - 9889.978: 51.4417% ( 228) 00:12:00.108 9889.978 - 9949.556: 53.2780% ( 228) 00:12:00.108 9949.556 - 10009.135: 55.1869% ( 237) 00:12:00.108 10009.135 - 10068.713: 57.0312% ( 229) 00:12:00.108 10068.713 - 10128.291: 58.7870% ( 218) 00:12:00.108 10128.291 - 10187.869: 60.5912% ( 224) 00:12:00.108 10187.869 - 10247.447: 62.2503% ( 206) 00:12:00.108 10247.447 - 10307.025: 63.9417% ( 210) 00:12:00.108 10307.025 - 10366.604: 65.5445% ( 199) 00:12:00.108 10366.604 - 10426.182: 67.1311% ( 197) 00:12:00.108 10426.182 - 10485.760: 68.7017% ( 195) 00:12:00.108 10485.760 - 10545.338: 70.1997% ( 186) 00:12:00.108 10545.338 - 10604.916: 71.5448% ( 167) 00:12:00.108 10604.916 - 10664.495: 72.9543% ( 175) 00:12:00.108 10664.495 - 10724.073: 74.3315% ( 171) 00:12:00.108 10724.073 - 10783.651: 75.6524% ( 164) 00:12:00.108 10783.651 - 10843.229: 76.9088% ( 156) 00:12:00.108 10843.229 - 10902.807: 78.2941% ( 172) 00:12:00.108 10902.807 - 10962.385: 79.3412% ( 130) 00:12:00.108 10962.385 - 11021.964: 80.5332% ( 148) 00:12:00.108 11021.964 - 11081.542: 81.5561% ( 127) 00:12:00.108 11081.542 - 11141.120: 82.6997% ( 142) 00:12:00.108 11141.120 - 11200.698: 83.7387% ( 129) 00:12:00.108 11200.698 - 11260.276: 84.7294% ( 123) 00:12:00.108 11260.276 - 11319.855: 85.6878% ( 119) 00:12:00.108 11319.855 - 11379.433: 86.5093% ( 102) 00:12:00.108 11379.433 - 11439.011: 87.1939% ( 85) 00:12:00.108 11439.011 - 11498.589: 87.9269% ( 91) 00:12:00.108 11498.589 - 11558.167: 88.5229% ( 74) 00:12:00.108 11558.167 - 11617.745: 89.0061% ( 60) 00:12:00.108 11617.745 - 11677.324: 89.5135% ( 63) 00:12:00.108 11677.324 - 11736.902: 
90.0129% ( 62) 00:12:00.108 11736.902 - 11796.480: 90.3834% ( 46) 00:12:00.108 11796.480 - 11856.058: 90.8102% ( 53) 00:12:00.108 11856.058 - 11915.636: 91.0760% ( 33) 00:12:00.108 11915.636 - 11975.215: 91.3499% ( 34) 00:12:00.108 11975.215 - 12034.793: 91.6076% ( 32) 00:12:00.108 12034.793 - 12094.371: 91.8814% ( 34) 00:12:00.108 12094.371 - 12153.949: 92.1472% ( 33) 00:12:00.108 12153.949 - 12213.527: 92.3969% ( 31) 00:12:00.108 12213.527 - 12273.105: 92.6224% ( 28) 00:12:00.108 12273.105 - 12332.684: 92.8963% ( 34) 00:12:00.108 12332.684 - 12392.262: 93.1057% ( 26) 00:12:00.108 12392.262 - 12451.840: 93.3070% ( 25) 00:12:00.108 12451.840 - 12511.418: 93.5325% ( 28) 00:12:00.108 12511.418 - 12570.996: 93.7339% ( 25) 00:12:00.108 12570.996 - 12630.575: 93.9594% ( 28) 00:12:00.108 12630.575 - 12690.153: 94.1124% ( 19) 00:12:00.108 12690.153 - 12749.731: 94.3138% ( 25) 00:12:00.108 12749.731 - 12809.309: 94.5151% ( 25) 00:12:00.108 12809.309 - 12868.887: 94.6762% ( 20) 00:12:00.108 12868.887 - 12928.465: 94.8776% ( 25) 00:12:00.108 12928.465 - 12988.044: 95.0467% ( 21) 00:12:00.108 12988.044 - 13047.622: 95.2078% ( 20) 00:12:00.108 13047.622 - 13107.200: 95.3528% ( 18) 00:12:00.108 13107.200 - 13166.778: 95.4897% ( 17) 00:12:00.108 13166.778 - 13226.356: 95.6910% ( 25) 00:12:00.108 13226.356 - 13285.935: 95.8119% ( 15) 00:12:00.108 13285.935 - 13345.513: 95.9488% ( 17) 00:12:00.108 13345.513 - 13405.091: 96.1018% ( 19) 00:12:00.108 13405.091 - 13464.669: 96.2226% ( 15) 00:12:00.108 13464.669 - 13524.247: 96.3273% ( 13) 00:12:00.108 13524.247 - 13583.825: 96.4240% ( 12) 00:12:00.108 13583.825 - 13643.404: 96.4965% ( 9) 00:12:00.108 13643.404 - 13702.982: 96.5448% ( 6) 00:12:00.108 13702.982 - 13762.560: 96.6092% ( 8) 00:12:00.108 13762.560 - 13822.138: 96.6334% ( 3) 00:12:00.108 13822.138 - 13881.716: 96.6656% ( 4) 00:12:00.108 13881.716 - 13941.295: 96.7059% ( 5) 00:12:00.108 13941.295 - 14000.873: 96.7461% ( 5) 00:12:00.108 14000.873 - 14060.451: 96.7784% ( 4) 00:12:00.108 14060.451 - 14120.029: 96.8267% ( 6) 00:12:00.108 14120.029 - 14179.607: 96.8589% ( 4) 00:12:00.108 14179.607 - 14239.185: 96.8831% ( 3) 00:12:00.108 14239.185 - 14298.764: 96.9072% ( 3) 00:12:00.108 15966.953 - 16086.109: 96.9314% ( 3) 00:12:00.108 16086.109 - 16205.265: 96.9555% ( 3) 00:12:00.108 16205.265 - 16324.422: 96.9878% ( 4) 00:12:00.108 16324.422 - 16443.578: 97.0280% ( 5) 00:12:00.108 16443.578 - 16562.735: 97.0925% ( 8) 00:12:00.108 16562.735 - 16681.891: 97.1569% ( 8) 00:12:00.108 16681.891 - 16801.047: 97.2133% ( 7) 00:12:00.108 16801.047 - 16920.204: 97.2777% ( 8) 00:12:00.108 16920.204 - 17039.360: 97.3502% ( 9) 00:12:00.108 17039.360 - 17158.516: 97.4227% ( 9) 00:12:00.108 17158.516 - 17277.673: 97.4952% ( 9) 00:12:00.108 17277.673 - 17396.829: 97.5596% ( 8) 00:12:00.108 17396.829 - 17515.985: 97.6240% ( 8) 00:12:00.108 17515.985 - 17635.142: 97.6965% ( 9) 00:12:00.108 17635.142 - 17754.298: 97.7771% ( 10) 00:12:00.108 17754.298 - 17873.455: 97.8415% ( 8) 00:12:00.108 17873.455 - 17992.611: 97.9140% ( 9) 00:12:00.108 17992.611 - 18111.767: 97.9704% ( 7) 00:12:00.108 18111.767 - 18230.924: 98.0509% ( 10) 00:12:00.108 18230.924 - 18350.080: 98.1234% ( 9) 00:12:00.108 18350.080 - 18469.236: 98.1959% ( 9) 00:12:00.108 18469.236 - 18588.393: 98.2684% ( 9) 00:12:00.108 18588.393 - 18707.549: 98.3409% ( 9) 00:12:00.108 18707.549 - 18826.705: 98.4294% ( 11) 00:12:00.108 18826.705 - 18945.862: 98.4858% ( 7) 00:12:00.108 18945.862 - 19065.018: 98.5583% ( 9) 00:12:00.108 19065.018 - 19184.175: 98.6308% ( 9) 
00:12:00.108 19184.175 - 19303.331: 98.7033% ( 9) 00:12:00.108 19303.331 - 19422.487: 98.7677% ( 8) 00:12:00.108 19422.487 - 19541.644: 98.8563% ( 11) 00:12:00.108 19541.644 - 19660.800: 98.9046% ( 6) 00:12:00.108 19660.800 - 19779.956: 98.9288% ( 3) 00:12:00.108 19779.956 - 19899.113: 98.9691% ( 5) 00:12:00.108 39559.913 - 39798.225: 98.9852% ( 2) 00:12:00.108 39798.225 - 40036.538: 99.0255% ( 5) 00:12:00.108 40036.538 - 40274.851: 99.0496% ( 3) 00:12:00.108 40274.851 - 40513.164: 99.0979% ( 6) 00:12:00.108 40513.164 - 40751.476: 99.1382% ( 5) 00:12:00.108 40751.476 - 40989.789: 99.1865% ( 6) 00:12:00.108 40989.789 - 41228.102: 99.2349% ( 6) 00:12:00.108 41228.102 - 41466.415: 99.2832% ( 6) 00:12:00.108 41466.415 - 41704.727: 99.3315% ( 6) 00:12:00.108 41704.727 - 41943.040: 99.3718% ( 5) 00:12:00.108 41943.040 - 42181.353: 99.4282% ( 7) 00:12:00.108 42181.353 - 42419.665: 99.4684% ( 5) 00:12:00.108 42419.665 - 42657.978: 99.5248% ( 7) 00:12:00.108 42657.978 - 42896.291: 99.5731% ( 6) 00:12:00.108 42896.291 - 43134.604: 99.6215% ( 6) 00:12:00.108 43134.604 - 43372.916: 99.6698% ( 6) 00:12:00.108 43372.916 - 43611.229: 99.7181% ( 6) 00:12:00.108 43611.229 - 43849.542: 99.7745% ( 7) 00:12:00.108 43849.542 - 44087.855: 99.8228% ( 6) 00:12:00.108 44087.855 - 44326.167: 99.8711% ( 6) 00:12:00.108 44326.167 - 44564.480: 99.9275% ( 7) 00:12:00.108 44564.480 - 44802.793: 99.9758% ( 6) 00:12:00.108 44802.793 - 45041.105: 100.0000% ( 3) 00:12:00.108 00:12:00.108 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:12:00.108 ============================================================================== 00:12:00.108 Range in us Cumulative IO count 00:12:00.108 7208.960 - 7238.749: 0.0322% ( 4) 00:12:00.108 7238.749 - 7268.538: 0.0564% ( 3) 00:12:00.108 7268.538 - 7298.327: 0.0725% ( 2) 00:12:00.108 7298.327 - 7328.116: 0.0966% ( 3) 00:12:00.109 7328.116 - 7357.905: 0.1289% ( 4) 00:12:00.109 7357.905 - 7387.695: 0.1611% ( 4) 00:12:00.109 7387.695 - 7417.484: 0.1772% ( 2) 00:12:00.109 7417.484 - 7447.273: 0.2336% ( 7) 00:12:00.109 7447.273 - 7477.062: 0.2819% ( 6) 00:12:00.109 7477.062 - 7506.851: 0.3544% ( 9) 00:12:00.109 7506.851 - 7536.640: 0.4027% ( 6) 00:12:00.109 7536.640 - 7566.429: 0.5396% ( 17) 00:12:00.109 7566.429 - 7596.218: 0.6604% ( 15) 00:12:00.109 7596.218 - 7626.007: 0.8054% ( 18) 00:12:00.109 7626.007 - 7685.585: 1.1195% ( 39) 00:12:00.109 7685.585 - 7745.164: 1.5544% ( 54) 00:12:00.109 7745.164 - 7804.742: 2.0538% ( 62) 00:12:00.109 7804.742 - 7864.320: 2.5773% ( 65) 00:12:00.109 7864.320 - 7923.898: 3.3022% ( 90) 00:12:00.109 7923.898 - 7983.476: 4.0271% ( 90) 00:12:00.109 7983.476 - 8043.055: 4.7841% ( 94) 00:12:00.109 8043.055 - 8102.633: 5.5573% ( 96) 00:12:00.109 8102.633 - 8162.211: 6.4352% ( 109) 00:12:00.109 8162.211 - 8221.789: 7.2890% ( 106) 00:12:00.109 8221.789 - 8281.367: 8.2072% ( 114) 00:12:00.109 8281.367 - 8340.945: 9.1736% ( 120) 00:12:00.109 8340.945 - 8400.524: 10.1885% ( 126) 00:12:00.109 8400.524 - 8460.102: 11.1630% ( 121) 00:12:00.109 8460.102 - 8519.680: 12.2101% ( 130) 00:12:00.109 8519.680 - 8579.258: 13.3376% ( 140) 00:12:00.109 8579.258 - 8638.836: 14.4088% ( 133) 00:12:00.109 8638.836 - 8698.415: 15.5928% ( 147) 00:12:00.109 8698.415 - 8757.993: 16.8492% ( 156) 00:12:00.109 8757.993 - 8817.571: 18.1298% ( 159) 00:12:00.109 8817.571 - 8877.149: 19.6198% ( 185) 00:12:00.109 8877.149 - 8936.727: 21.1179% ( 186) 00:12:00.109 8936.727 - 8996.305: 22.7287% ( 200) 00:12:00.109 8996.305 - 9055.884: 24.3396% ( 200) 00:12:00.109 9055.884 - 9115.462: 
26.0390% ( 211) 00:12:00.109 9115.462 - 9175.040: 27.7867% ( 217) 00:12:00.109 9175.040 - 9234.618: 29.6794% ( 235) 00:12:00.109 9234.618 - 9294.196: 31.6366% ( 243) 00:12:00.109 9294.196 - 9353.775: 33.6018% ( 244) 00:12:00.109 9353.775 - 9413.353: 35.6476% ( 254) 00:12:00.109 9413.353 - 9472.931: 37.6450% ( 248) 00:12:00.109 9472.931 - 9532.509: 39.7310% ( 259) 00:12:00.109 9532.509 - 9592.087: 41.7526% ( 251) 00:12:00.109 9592.087 - 9651.665: 43.8466% ( 260) 00:12:00.109 9651.665 - 9711.244: 46.0132% ( 269) 00:12:00.109 9711.244 - 9770.822: 48.0912% ( 258) 00:12:00.109 9770.822 - 9830.400: 50.2577% ( 269) 00:12:00.109 9830.400 - 9889.978: 52.3599% ( 261) 00:12:00.109 9889.978 - 9949.556: 54.4861% ( 264) 00:12:00.109 9949.556 - 10009.135: 56.5883% ( 261) 00:12:00.109 10009.135 - 10068.713: 58.6018% ( 250) 00:12:00.109 10068.713 - 10128.291: 60.5106% ( 237) 00:12:00.109 10128.291 - 10187.869: 62.4034% ( 235) 00:12:00.109 10187.869 - 10247.447: 64.1914% ( 222) 00:12:00.109 10247.447 - 10307.025: 65.9069% ( 213) 00:12:00.109 10307.025 - 10366.604: 67.5258% ( 201) 00:12:00.109 10366.604 - 10426.182: 69.2413% ( 213) 00:12:00.109 10426.182 - 10485.760: 70.9005% ( 206) 00:12:00.109 10485.760 - 10545.338: 72.5274% ( 202) 00:12:00.109 10545.338 - 10604.916: 74.1785% ( 205) 00:12:00.109 10604.916 - 10664.495: 75.6443% ( 182) 00:12:00.109 10664.495 - 10724.073: 77.0055% ( 169) 00:12:00.109 10724.073 - 10783.651: 78.3908% ( 172) 00:12:00.109 10783.651 - 10843.229: 79.7036% ( 163) 00:12:00.109 10843.229 - 10902.807: 80.9439% ( 154) 00:12:00.109 10902.807 - 10962.385: 82.1440% ( 149) 00:12:00.109 10962.385 - 11021.964: 83.2233% ( 134) 00:12:00.109 11021.964 - 11081.542: 84.1898% ( 120) 00:12:00.109 11081.542 - 11141.120: 85.0435% ( 106) 00:12:00.109 11141.120 - 11200.698: 85.8409% ( 99) 00:12:00.109 11200.698 - 11260.276: 86.5899% ( 93) 00:12:00.109 11260.276 - 11319.855: 87.3389% ( 93) 00:12:00.109 11319.855 - 11379.433: 87.9913% ( 81) 00:12:00.109 11379.433 - 11439.011: 88.6115% ( 77) 00:12:00.109 11439.011 - 11498.589: 89.1511% ( 67) 00:12:00.109 11498.589 - 11558.167: 89.5619% ( 51) 00:12:00.109 11558.167 - 11617.745: 89.9323% ( 46) 00:12:00.109 11617.745 - 11677.324: 90.3028% ( 46) 00:12:00.109 11677.324 - 11736.902: 90.6572% ( 44) 00:12:00.109 11736.902 - 11796.480: 91.0116% ( 44) 00:12:00.109 11796.480 - 11856.058: 91.3338% ( 40) 00:12:00.109 11856.058 - 11915.636: 91.6640% ( 41) 00:12:00.109 11915.636 - 11975.215: 91.9298% ( 33) 00:12:00.109 11975.215 - 12034.793: 92.2197% ( 36) 00:12:00.109 12034.793 - 12094.371: 92.4936% ( 34) 00:12:00.109 12094.371 - 12153.949: 92.7593% ( 33) 00:12:00.109 12153.949 - 12213.527: 93.0171% ( 32) 00:12:00.109 12213.527 - 12273.105: 93.2587% ( 30) 00:12:00.109 12273.105 - 12332.684: 93.5084% ( 31) 00:12:00.109 12332.684 - 12392.262: 93.7258% ( 27) 00:12:00.109 12392.262 - 12451.840: 93.9272% ( 25) 00:12:00.109 12451.840 - 12511.418: 94.1608% ( 29) 00:12:00.109 12511.418 - 12570.996: 94.3863% ( 28) 00:12:00.109 12570.996 - 12630.575: 94.6360% ( 31) 00:12:00.109 12630.575 - 12690.153: 94.8534% ( 27) 00:12:00.109 12690.153 - 12749.731: 95.0789% ( 28) 00:12:00.109 12749.731 - 12809.309: 95.2642% ( 23) 00:12:00.109 12809.309 - 12868.887: 95.4333% ( 21) 00:12:00.109 12868.887 - 12928.465: 95.6024% ( 21) 00:12:00.109 12928.465 - 12988.044: 95.7635% ( 20) 00:12:00.109 12988.044 - 13047.622: 95.8843% ( 15) 00:12:00.109 13047.622 - 13107.200: 96.0132% ( 16) 00:12:00.109 13107.200 - 13166.778: 96.1099% ( 12) 00:12:00.109 13166.778 - 13226.356: 96.2226% ( 14) 00:12:00.109 
13226.356 - 13285.935: 96.3354% ( 14) 00:12:00.109 13285.935 - 13345.513: 96.4481% ( 14) 00:12:00.109 13345.513 - 13405.091: 96.5287% ( 10) 00:12:00.109 13405.091 - 13464.669: 96.5689% ( 5) 00:12:00.109 13464.669 - 13524.247: 96.6173% ( 6) 00:12:00.109 13524.247 - 13583.825: 96.6495% ( 4) 00:12:00.109 13583.825 - 13643.404: 96.6978% ( 6) 00:12:00.109 13643.404 - 13702.982: 96.7300% ( 4) 00:12:00.109 13702.982 - 13762.560: 96.7703% ( 5) 00:12:00.109 13762.560 - 13822.138: 96.8106% ( 5) 00:12:00.109 13822.138 - 13881.716: 96.8589% ( 6) 00:12:00.109 13881.716 - 13941.295: 96.8831% ( 3) 00:12:00.109 13941.295 - 14000.873: 96.9072% ( 3) 00:12:00.109 17277.673 - 17396.829: 96.9314% ( 3) 00:12:00.109 17396.829 - 17515.985: 96.9636% ( 4) 00:12:00.109 17515.985 - 17635.142: 97.0119% ( 6) 00:12:00.109 17635.142 - 17754.298: 97.0925% ( 10) 00:12:00.109 17754.298 - 17873.455: 97.1811% ( 11) 00:12:00.109 17873.455 - 17992.611: 97.2616% ( 10) 00:12:00.109 17992.611 - 18111.767: 97.3502% ( 11) 00:12:00.109 18111.767 - 18230.924: 97.4307% ( 10) 00:12:00.109 18230.924 - 18350.080: 97.5113% ( 10) 00:12:00.109 18350.080 - 18469.236: 97.5918% ( 10) 00:12:00.109 18469.236 - 18588.393: 97.6804% ( 11) 00:12:00.109 18588.393 - 18707.549: 97.7610% ( 10) 00:12:00.109 18707.549 - 18826.705: 97.8576% ( 12) 00:12:00.109 18826.705 - 18945.862: 97.9462% ( 11) 00:12:00.109 18945.862 - 19065.018: 98.0348% ( 11) 00:12:00.109 19065.018 - 19184.175: 98.1234% ( 11) 00:12:00.109 19184.175 - 19303.331: 98.2120% ( 11) 00:12:00.109 19303.331 - 19422.487: 98.3086% ( 12) 00:12:00.109 19422.487 - 19541.644: 98.3972% ( 11) 00:12:00.109 19541.644 - 19660.800: 98.4697% ( 9) 00:12:00.109 19660.800 - 19779.956: 98.5664% ( 12) 00:12:00.109 19779.956 - 19899.113: 98.6469% ( 10) 00:12:00.109 19899.113 - 20018.269: 98.7436% ( 12) 00:12:00.109 20018.269 - 20137.425: 98.8322% ( 11) 00:12:00.109 20137.425 - 20256.582: 98.8724% ( 5) 00:12:00.109 20256.582 - 20375.738: 98.9127% ( 5) 00:12:00.109 20375.738 - 20494.895: 98.9530% ( 5) 00:12:00.109 20494.895 - 20614.051: 98.9691% ( 2) 00:12:00.109 37653.411 - 37891.724: 99.0174% ( 6) 00:12:00.109 37891.724 - 38130.036: 99.0577% ( 5) 00:12:00.109 38130.036 - 38368.349: 99.1060% ( 6) 00:12:00.109 38368.349 - 38606.662: 99.1624% ( 7) 00:12:00.109 38606.662 - 38844.975: 99.2107% ( 6) 00:12:00.109 38844.975 - 39083.287: 99.2671% ( 7) 00:12:00.109 39083.287 - 39321.600: 99.3154% ( 6) 00:12:00.109 39321.600 - 39559.913: 99.3637% ( 6) 00:12:00.109 39559.913 - 39798.225: 99.4120% ( 6) 00:12:00.109 39798.225 - 40036.538: 99.4684% ( 7) 00:12:00.109 40036.538 - 40274.851: 99.5168% ( 6) 00:12:00.109 40274.851 - 40513.164: 99.5651% ( 6) 00:12:00.109 40513.164 - 40751.476: 99.6134% ( 6) 00:12:00.109 40751.476 - 40989.789: 99.6617% ( 6) 00:12:00.109 40989.789 - 41228.102: 99.7181% ( 7) 00:12:00.109 41228.102 - 41466.415: 99.7664% ( 6) 00:12:00.109 41466.415 - 41704.727: 99.8228% ( 7) 00:12:00.109 41704.727 - 41943.040: 99.8711% ( 6) 00:12:00.109 41943.040 - 42181.353: 99.9195% ( 6) 00:12:00.109 42181.353 - 42419.665: 99.9758% ( 7) 00:12:00.109 42419.665 - 42657.978: 100.0000% ( 3) 00:12:00.109 00:12:00.109 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:12:00.109 ============================================================================== 00:12:00.109 Range in us Cumulative IO count 00:12:00.109 7298.327 - 7328.116: 0.0161% ( 2) 00:12:00.109 7328.116 - 7357.905: 0.0403% ( 3) 00:12:00.109 7357.905 - 7387.695: 0.0966% ( 7) 00:12:00.109 7387.695 - 7417.484: 0.1369% ( 5) 00:12:00.109 7417.484 - 
7447.273: 0.1691% ( 4) 00:12:00.109 7447.273 - 7477.062: 0.2255% ( 7) 00:12:00.109 7477.062 - 7506.851: 0.2819% ( 7) 00:12:00.109 7506.851 - 7536.640: 0.3463% ( 8) 00:12:00.109 7536.640 - 7566.429: 0.4269% ( 10) 00:12:00.109 7566.429 - 7596.218: 0.5316% ( 13) 00:12:00.109 7596.218 - 7626.007: 0.6363% ( 13) 00:12:00.109 7626.007 - 7685.585: 1.0712% ( 54) 00:12:00.109 7685.585 - 7745.164: 1.5142% ( 55) 00:12:00.109 7745.164 - 7804.742: 2.0619% ( 68) 00:12:00.109 7804.742 - 7864.320: 2.7142% ( 81) 00:12:00.109 7864.320 - 7923.898: 3.3747% ( 82) 00:12:00.109 7923.898 - 7983.476: 4.0673% ( 86) 00:12:00.109 7983.476 - 8043.055: 4.8566% ( 98) 00:12:00.109 8043.055 - 8102.633: 5.5815% ( 90) 00:12:00.109 8102.633 - 8162.211: 6.3466% ( 95) 00:12:00.109 8162.211 - 8221.789: 7.2004% ( 106) 00:12:00.109 8221.789 - 8281.367: 8.0702% ( 108) 00:12:00.110 8281.367 - 8340.945: 8.9320% ( 107) 00:12:00.110 8340.945 - 8400.524: 9.8582% ( 115) 00:12:00.110 8400.524 - 8460.102: 10.8328% ( 121) 00:12:00.110 8460.102 - 8519.680: 12.0087% ( 146) 00:12:00.110 8519.680 - 8579.258: 13.1282% ( 139) 00:12:00.110 8579.258 - 8638.836: 14.3524% ( 152) 00:12:00.110 8638.836 - 8698.415: 15.6089% ( 156) 00:12:00.110 8698.415 - 8757.993: 16.7848% ( 146) 00:12:00.110 8757.993 - 8817.571: 18.1459% ( 169) 00:12:00.110 8817.571 - 8877.149: 19.4910% ( 167) 00:12:00.110 8877.149 - 8936.727: 20.9166% ( 177) 00:12:00.110 8936.727 - 8996.305: 22.5596% ( 204) 00:12:00.110 8996.305 - 9055.884: 24.2429% ( 209) 00:12:00.110 9055.884 - 9115.462: 25.9987% ( 218) 00:12:00.110 9115.462 - 9175.040: 27.8592% ( 231) 00:12:00.110 9175.040 - 9234.618: 29.6472% ( 222) 00:12:00.110 9234.618 - 9294.196: 31.5641% ( 238) 00:12:00.110 9294.196 - 9353.775: 33.4971% ( 240) 00:12:00.110 9353.775 - 9413.353: 35.6073% ( 262) 00:12:00.110 9413.353 - 9472.931: 37.6691% ( 256) 00:12:00.110 9472.931 - 9532.509: 39.8276% ( 268) 00:12:00.110 9532.509 - 9592.087: 41.9459% ( 263) 00:12:00.110 9592.087 - 9651.665: 44.1366% ( 272) 00:12:00.110 9651.665 - 9711.244: 46.2709% ( 265) 00:12:00.110 9711.244 - 9770.822: 48.3811% ( 262) 00:12:00.110 9770.822 - 9830.400: 50.5074% ( 264) 00:12:00.110 9830.400 - 9889.978: 52.5773% ( 257) 00:12:00.110 9889.978 - 9949.556: 54.6956% ( 263) 00:12:00.110 9949.556 - 10009.135: 56.7252% ( 252) 00:12:00.110 10009.135 - 10068.713: 58.6501% ( 239) 00:12:00.110 10068.713 - 10128.291: 60.4784% ( 227) 00:12:00.110 10128.291 - 10187.869: 62.1778% ( 211) 00:12:00.110 10187.869 - 10247.447: 63.8531% ( 208) 00:12:00.110 10247.447 - 10307.025: 65.5364% ( 209) 00:12:00.110 10307.025 - 10366.604: 67.2439% ( 212) 00:12:00.110 10366.604 - 10426.182: 68.8628% ( 201) 00:12:00.110 10426.182 - 10485.760: 70.4494% ( 197) 00:12:00.110 10485.760 - 10545.338: 71.9878% ( 191) 00:12:00.110 10545.338 - 10604.916: 73.5583% ( 195) 00:12:00.110 10604.916 - 10664.495: 75.0805% ( 189) 00:12:00.110 10664.495 - 10724.073: 76.5142% ( 178) 00:12:00.110 10724.073 - 10783.651: 77.8592% ( 167) 00:12:00.110 10783.651 - 10843.229: 79.0110% ( 143) 00:12:00.110 10843.229 - 10902.807: 80.1063% ( 136) 00:12:00.110 10902.807 - 10962.385: 81.1936% ( 135) 00:12:00.110 10962.385 - 11021.964: 82.2890% ( 136) 00:12:00.110 11021.964 - 11081.542: 83.3280% ( 129) 00:12:00.110 11081.542 - 11141.120: 84.3347% ( 125) 00:12:00.110 11141.120 - 11200.698: 85.3012% ( 120) 00:12:00.110 11200.698 - 11260.276: 86.2355% ( 116) 00:12:00.110 11260.276 - 11319.855: 87.1134% ( 109) 00:12:00.110 11319.855 - 11379.433: 87.8463% ( 91) 00:12:00.110 11379.433 - 11439.011: 88.4343% ( 73) 00:12:00.110 
11439.011 - 11498.589: 88.9256% ( 61) 00:12:00.110 11498.589 - 11558.167: 89.4008% ( 59) 00:12:00.110 11558.167 - 11617.745: 89.8840% ( 60) 00:12:00.110 11617.745 - 11677.324: 90.3189% ( 54) 00:12:00.110 11677.324 - 11736.902: 90.7055% ( 48) 00:12:00.110 11736.902 - 11796.480: 91.0760% ( 46) 00:12:00.110 11796.480 - 11856.058: 91.4304% ( 44) 00:12:00.110 11856.058 - 11915.636: 91.7687% ( 42) 00:12:00.110 11915.636 - 11975.215: 92.1070% ( 42) 00:12:00.110 11975.215 - 12034.793: 92.4372% ( 41) 00:12:00.110 12034.793 - 12094.371: 92.7352% ( 37) 00:12:00.110 12094.371 - 12153.949: 93.0412% ( 38) 00:12:00.110 12153.949 - 12213.527: 93.3392% ( 37) 00:12:00.110 12213.527 - 12273.105: 93.6211% ( 35) 00:12:00.110 12273.105 - 12332.684: 93.9191% ( 37) 00:12:00.110 12332.684 - 12392.262: 94.1849% ( 33) 00:12:00.110 12392.262 - 12451.840: 94.4507% ( 33) 00:12:00.110 12451.840 - 12511.418: 94.7004% ( 31) 00:12:00.110 12511.418 - 12570.996: 94.9340% ( 29) 00:12:00.110 12570.996 - 12630.575: 95.1675% ( 29) 00:12:00.110 12630.575 - 12690.153: 95.3689% ( 25) 00:12:00.110 12690.153 - 12749.731: 95.5702% ( 25) 00:12:00.110 12749.731 - 12809.309: 95.7313% ( 20) 00:12:00.110 12809.309 - 12868.887: 95.8682% ( 17) 00:12:00.110 12868.887 - 12928.465: 96.0052% ( 17) 00:12:00.110 12928.465 - 12988.044: 96.1340% ( 16) 00:12:00.110 12988.044 - 13047.622: 96.2468% ( 14) 00:12:00.110 13047.622 - 13107.200: 96.3756% ( 16) 00:12:00.110 13107.200 - 13166.778: 96.4723% ( 12) 00:12:00.110 13166.778 - 13226.356: 96.5689% ( 12) 00:12:00.110 13226.356 - 13285.935: 96.6575% ( 11) 00:12:00.110 13285.935 - 13345.513: 96.7220% ( 8) 00:12:00.110 13345.513 - 13405.091: 96.7784% ( 7) 00:12:00.110 13405.091 - 13464.669: 96.8186% ( 5) 00:12:00.110 13464.669 - 13524.247: 96.8508% ( 4) 00:12:00.110 13524.247 - 13583.825: 96.8750% ( 3) 00:12:00.110 13583.825 - 13643.404: 96.8911% ( 2) 00:12:00.110 13643.404 - 13702.982: 96.9072% ( 2) 00:12:00.110 16681.891 - 16801.047: 96.9153% ( 1) 00:12:00.110 16801.047 - 16920.204: 96.9636% ( 6) 00:12:00.110 16920.204 - 17039.360: 97.0119% ( 6) 00:12:00.110 17039.360 - 17158.516: 97.0683% ( 7) 00:12:00.110 17158.516 - 17277.673: 97.1166% ( 6) 00:12:00.110 17277.673 - 17396.829: 97.1569% ( 5) 00:12:00.110 17396.829 - 17515.985: 97.2052% ( 6) 00:12:00.110 17515.985 - 17635.142: 97.2455% ( 5) 00:12:00.110 17635.142 - 17754.298: 97.3099% ( 8) 00:12:00.110 17754.298 - 17873.455: 97.4066% ( 12) 00:12:00.110 17873.455 - 17992.611: 97.4871% ( 10) 00:12:00.110 17992.611 - 18111.767: 97.5757% ( 11) 00:12:00.110 18111.767 - 18230.924: 97.6643% ( 11) 00:12:00.110 18230.924 - 18350.080: 97.7529% ( 11) 00:12:00.110 18350.080 - 18469.236: 97.8415% ( 11) 00:12:00.110 18469.236 - 18588.393: 97.9301% ( 11) 00:12:00.110 18588.393 - 18707.549: 98.0267% ( 12) 00:12:00.110 18707.549 - 18826.705: 98.1073% ( 10) 00:12:00.110 18826.705 - 18945.862: 98.1959% ( 11) 00:12:00.110 18945.862 - 19065.018: 98.2845% ( 11) 00:12:00.110 19065.018 - 19184.175: 98.3731% ( 11) 00:12:00.110 19184.175 - 19303.331: 98.4536% ( 10) 00:12:00.110 19303.331 - 19422.487: 98.4858% ( 4) 00:12:00.110 19422.487 - 19541.644: 98.5261% ( 5) 00:12:00.110 19541.644 - 19660.800: 98.5664% ( 5) 00:12:00.110 19660.800 - 19779.956: 98.5986% ( 4) 00:12:00.110 19779.956 - 19899.113: 98.6389% ( 5) 00:12:00.110 19899.113 - 20018.269: 98.6791% ( 5) 00:12:00.110 20018.269 - 20137.425: 98.7194% ( 5) 00:12:00.110 20137.425 - 20256.582: 98.7597% ( 5) 00:12:00.110 20256.582 - 20375.738: 98.7838% ( 3) 00:12:00.110 20375.738 - 20494.895: 98.8241% ( 5) 00:12:00.110 20494.895 
- 20614.051: 98.8563% ( 4) 00:12:00.110 20614.051 - 20733.207: 98.8966% ( 5) 00:12:00.110 20733.207 - 20852.364: 98.9369% ( 5) 00:12:00.110 20852.364 - 20971.520: 98.9691% ( 4) 00:12:00.110 36461.847 - 36700.160: 98.9852% ( 2) 00:12:00.110 36700.160 - 36938.473: 99.0416% ( 7) 00:12:00.110 36938.473 - 37176.785: 99.0818% ( 5) 00:12:00.110 37176.785 - 37415.098: 99.1382% ( 7) 00:12:00.110 37415.098 - 37653.411: 99.1865% ( 6) 00:12:00.110 37653.411 - 37891.724: 99.2429% ( 7) 00:12:00.110 37891.724 - 38130.036: 99.2832% ( 5) 00:12:00.110 38130.036 - 38368.349: 99.3396% ( 7) 00:12:00.110 38368.349 - 38606.662: 99.3879% ( 6) 00:12:00.110 38606.662 - 38844.975: 99.4362% ( 6) 00:12:00.110 38844.975 - 39083.287: 99.4765% ( 5) 00:12:00.110 39083.287 - 39321.600: 99.5329% ( 7) 00:12:00.110 39321.600 - 39559.913: 99.5731% ( 5) 00:12:00.110 39559.913 - 39798.225: 99.6295% ( 7) 00:12:00.110 39798.225 - 40036.538: 99.6778% ( 6) 00:12:00.110 40036.538 - 40274.851: 99.7342% ( 7) 00:12:00.110 40274.851 - 40513.164: 99.7825% ( 6) 00:12:00.110 40513.164 - 40751.476: 99.8309% ( 6) 00:12:00.110 40751.476 - 40989.789: 99.8872% ( 7) 00:12:00.110 40989.789 - 41228.102: 99.9356% ( 6) 00:12:00.110 41228.102 - 41466.415: 99.9919% ( 7) 00:12:00.110 41466.415 - 41704.727: 100.0000% ( 1) 00:12:00.110 00:12:00.110 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:12:00.110 ============================================================================== 00:12:00.110 Range in us Cumulative IO count 00:12:00.110 7357.905 - 7387.695: 0.0403% ( 5) 00:12:00.110 7387.695 - 7417.484: 0.0644% ( 3) 00:12:00.110 7417.484 - 7447.273: 0.1208% ( 7) 00:12:00.110 7447.273 - 7477.062: 0.1691% ( 6) 00:12:00.110 7477.062 - 7506.851: 0.2175% ( 6) 00:12:00.110 7506.851 - 7536.640: 0.2819% ( 8) 00:12:00.110 7536.640 - 7566.429: 0.3544% ( 9) 00:12:00.110 7566.429 - 7596.218: 0.4269% ( 9) 00:12:00.110 7596.218 - 7626.007: 0.5638% ( 17) 00:12:00.110 7626.007 - 7685.585: 0.9021% ( 42) 00:12:00.110 7685.585 - 7745.164: 1.4095% ( 63) 00:12:00.110 7745.164 - 7804.742: 1.9572% ( 68) 00:12:00.110 7804.742 - 7864.320: 2.5370% ( 72) 00:12:00.110 7864.320 - 7923.898: 3.2216% ( 85) 00:12:00.110 7923.898 - 7983.476: 3.9304% ( 88) 00:12:00.110 7983.476 - 8043.055: 4.7036% ( 96) 00:12:00.110 8043.055 - 8102.633: 5.4688% ( 95) 00:12:00.110 8102.633 - 8162.211: 6.2339% ( 95) 00:12:00.110 8162.211 - 8221.789: 7.0715% ( 104) 00:12:00.110 8221.789 - 8281.367: 7.9655% ( 111) 00:12:00.110 8281.367 - 8340.945: 8.8515% ( 110) 00:12:00.110 8340.945 - 8400.524: 9.7455% ( 111) 00:12:00.110 8400.524 - 8460.102: 10.7442% ( 124) 00:12:00.110 8460.102 - 8519.680: 11.8476% ( 137) 00:12:00.110 8519.680 - 8579.258: 12.8785% ( 128) 00:12:00.110 8579.258 - 8638.836: 14.0544% ( 146) 00:12:00.110 8638.836 - 8698.415: 15.2465% ( 148) 00:12:00.110 8698.415 - 8757.993: 16.5110% ( 157) 00:12:00.110 8757.993 - 8817.571: 17.8077% ( 161) 00:12:00.110 8817.571 - 8877.149: 19.1205% ( 163) 00:12:00.110 8877.149 - 8936.727: 20.6186% ( 186) 00:12:00.110 8936.727 - 8996.305: 22.1730% ( 193) 00:12:00.110 8996.305 - 9055.884: 23.8644% ( 210) 00:12:00.110 9055.884 - 9115.462: 25.5960% ( 215) 00:12:00.110 9115.462 - 9175.040: 27.4404% ( 229) 00:12:00.110 9175.040 - 9234.618: 29.3895% ( 242) 00:12:00.111 9234.618 - 9294.196: 31.3466% ( 243) 00:12:00.111 9294.196 - 9353.775: 33.3360% ( 247) 00:12:00.111 9353.775 - 9413.353: 35.3898% ( 255) 00:12:00.111 9413.353 - 9472.931: 37.4919% ( 261) 00:12:00.111 9472.931 - 9532.509: 39.6988% ( 274) 00:12:00.111 9532.509 - 9592.087: 41.8573% ( 268) 
00:12:00.111 9592.087 - 9651.665: 44.0963% ( 278) 00:12:00.111 9651.665 - 9711.244: 46.3354% ( 278) 00:12:00.111 9711.244 - 9770.822: 48.5100% ( 270) 00:12:00.111 9770.822 - 9830.400: 50.7732% ( 281) 00:12:00.111 9830.400 - 9889.978: 52.8270% ( 255) 00:12:00.111 9889.978 - 9949.556: 54.9291% ( 261) 00:12:00.111 9949.556 - 10009.135: 56.9346% ( 249) 00:12:00.111 10009.135 - 10068.713: 58.8918% ( 243) 00:12:00.111 10068.713 - 10128.291: 60.7120% ( 226) 00:12:00.111 10128.291 - 10187.869: 62.4114% ( 211) 00:12:00.111 10187.869 - 10247.447: 64.0867% ( 208) 00:12:00.111 10247.447 - 10307.025: 65.7216% ( 203) 00:12:00.111 10307.025 - 10366.604: 67.3647% ( 204) 00:12:00.111 10366.604 - 10426.182: 68.9594% ( 198) 00:12:00.111 10426.182 - 10485.760: 70.5058% ( 192) 00:12:00.111 10485.760 - 10545.338: 72.0441% ( 191) 00:12:00.111 10545.338 - 10604.916: 73.5825% ( 191) 00:12:00.111 10604.916 - 10664.495: 75.0403% ( 181) 00:12:00.111 10664.495 - 10724.073: 76.3934% ( 168) 00:12:00.111 10724.073 - 10783.651: 77.7948% ( 174) 00:12:00.111 10783.651 - 10843.229: 79.0512% ( 156) 00:12:00.111 10843.229 - 10902.807: 80.2191% ( 145) 00:12:00.111 10902.807 - 10962.385: 81.3305% ( 138) 00:12:00.111 10962.385 - 11021.964: 82.4420% ( 138) 00:12:00.111 11021.964 - 11081.542: 83.5615% ( 139) 00:12:00.111 11081.542 - 11141.120: 84.4716% ( 113) 00:12:00.111 11141.120 - 11200.698: 85.3495% ( 109) 00:12:00.111 11200.698 - 11260.276: 86.0905% ( 92) 00:12:00.111 11260.276 - 11319.855: 86.7590% ( 83) 00:12:00.111 11319.855 - 11379.433: 87.3389% ( 72) 00:12:00.111 11379.433 - 11439.011: 87.9188% ( 72) 00:12:00.111 11439.011 - 11498.589: 88.4262% ( 63) 00:12:00.111 11498.589 - 11558.167: 88.8934% ( 58) 00:12:00.111 11558.167 - 11617.745: 89.3605% ( 58) 00:12:00.111 11617.745 - 11677.324: 89.8276% ( 58) 00:12:00.111 11677.324 - 11736.902: 90.2867% ( 57) 00:12:00.111 11736.902 - 11796.480: 90.6653% ( 47) 00:12:00.111 11796.480 - 11856.058: 91.0116% ( 43) 00:12:00.111 11856.058 - 11915.636: 91.3096% ( 37) 00:12:00.111 11915.636 - 11975.215: 91.6237% ( 39) 00:12:00.111 11975.215 - 12034.793: 91.9620% ( 42) 00:12:00.111 12034.793 - 12094.371: 92.2761% ( 39) 00:12:00.111 12094.371 - 12153.949: 92.6063% ( 41) 00:12:00.111 12153.949 - 12213.527: 92.9204% ( 39) 00:12:00.111 12213.527 - 12273.105: 93.2023% ( 35) 00:12:00.111 12273.105 - 12332.684: 93.5245% ( 40) 00:12:00.111 12332.684 - 12392.262: 93.8466% ( 40) 00:12:00.111 12392.262 - 12451.840: 94.1447% ( 37) 00:12:00.111 12451.840 - 12511.418: 94.3863% ( 30) 00:12:00.111 12511.418 - 12570.996: 94.6601% ( 34) 00:12:00.111 12570.996 - 12630.575: 94.8534% ( 24) 00:12:00.111 12630.575 - 12690.153: 95.0145% ( 20) 00:12:00.111 12690.153 - 12749.731: 95.1917% ( 22) 00:12:00.111 12749.731 - 12809.309: 95.3286% ( 17) 00:12:00.111 12809.309 - 12868.887: 95.4414% ( 14) 00:12:00.111 12868.887 - 12928.465: 95.5622% ( 15) 00:12:00.111 12928.465 - 12988.044: 95.6669% ( 13) 00:12:00.111 12988.044 - 13047.622: 95.7796% ( 14) 00:12:00.111 13047.622 - 13107.200: 95.8843% ( 13) 00:12:00.111 13107.200 - 13166.778: 95.9971% ( 14) 00:12:00.111 13166.778 - 13226.356: 96.1099% ( 14) 00:12:00.111 13226.356 - 13285.935: 96.2307% ( 15) 00:12:00.111 13285.935 - 13345.513: 96.3434% ( 14) 00:12:00.111 13345.513 - 13405.091: 96.4240% ( 10) 00:12:00.111 13405.091 - 13464.669: 96.5126% ( 11) 00:12:00.111 13464.669 - 13524.247: 96.5770% ( 8) 00:12:00.111 13524.247 - 13583.825: 96.6495% ( 9) 00:12:00.111 13583.825 - 13643.404: 96.7300% ( 10) 00:12:00.111 13643.404 - 13702.982: 96.7864% ( 7) 00:12:00.111 13702.982 - 
13762.560: 96.8186% ( 4) 00:12:00.111 13762.560 - 13822.138: 96.8508% ( 4) 00:12:00.111 13822.138 - 13881.716: 96.8750% ( 3) 00:12:00.111 13881.716 - 13941.295: 96.8992% ( 3) 00:12:00.111 13941.295 - 14000.873: 96.9072% ( 1) 00:12:00.111 15490.327 - 15609.484: 96.9475% ( 5) 00:12:00.111 15609.484 - 15728.640: 96.9878% ( 5) 00:12:00.111 15728.640 - 15847.796: 97.0361% ( 6) 00:12:00.111 15847.796 - 15966.953: 97.0844% ( 6) 00:12:00.111 15966.953 - 16086.109: 97.1327% ( 6) 00:12:00.111 16086.109 - 16205.265: 97.1891% ( 7) 00:12:00.111 16205.265 - 16324.422: 97.2374% ( 6) 00:12:00.111 16324.422 - 16443.578: 97.2858% ( 6) 00:12:00.111 16443.578 - 16562.735: 97.3260% ( 5) 00:12:00.111 16562.735 - 16681.891: 97.3824% ( 7) 00:12:00.111 16681.891 - 16801.047: 97.4307% ( 6) 00:12:00.111 16801.047 - 16920.204: 97.4871% ( 7) 00:12:00.111 16920.204 - 17039.360: 97.5354% ( 6) 00:12:00.111 17039.360 - 17158.516: 97.5838% ( 6) 00:12:00.111 17158.516 - 17277.673: 97.6321% ( 6) 00:12:00.111 17277.673 - 17396.829: 97.6804% ( 6) 00:12:00.111 17396.829 - 17515.985: 97.7287% ( 6) 00:12:00.111 17515.985 - 17635.142: 97.7771% ( 6) 00:12:00.111 17635.142 - 17754.298: 97.8334% ( 7) 00:12:00.111 17754.298 - 17873.455: 97.8818% ( 6) 00:12:00.111 17873.455 - 17992.611: 97.9301% ( 6) 00:12:00.111 17992.611 - 18111.767: 97.9381% ( 1) 00:12:00.111 18588.393 - 18707.549: 97.9784% ( 5) 00:12:00.111 18707.549 - 18826.705: 98.0187% ( 5) 00:12:00.111 18826.705 - 18945.862: 98.0590% ( 5) 00:12:00.111 18945.862 - 19065.018: 98.0992% ( 5) 00:12:00.111 19065.018 - 19184.175: 98.1314% ( 4) 00:12:00.111 19184.175 - 19303.331: 98.1717% ( 5) 00:12:00.111 19303.331 - 19422.487: 98.2120% ( 5) 00:12:00.111 19422.487 - 19541.644: 98.2523% ( 5) 00:12:00.111 19541.644 - 19660.800: 98.2925% ( 5) 00:12:00.111 19660.800 - 19779.956: 98.3247% ( 4) 00:12:00.111 19779.956 - 19899.113: 98.3650% ( 5) 00:12:00.111 19899.113 - 20018.269: 98.3972% ( 4) 00:12:00.111 20018.269 - 20137.425: 98.4375% ( 5) 00:12:00.111 20137.425 - 20256.582: 98.4778% ( 5) 00:12:00.111 20256.582 - 20375.738: 98.5180% ( 5) 00:12:00.111 20375.738 - 20494.895: 98.5583% ( 5) 00:12:00.111 20494.895 - 20614.051: 98.5986% ( 5) 00:12:00.111 20614.051 - 20733.207: 98.6389% ( 5) 00:12:00.111 20733.207 - 20852.364: 98.6791% ( 5) 00:12:00.111 20852.364 - 20971.520: 98.7194% ( 5) 00:12:00.111 20971.520 - 21090.676: 98.7516% ( 4) 00:12:00.111 21090.676 - 21209.833: 98.7919% ( 5) 00:12:00.111 21209.833 - 21328.989: 98.8322% ( 5) 00:12:00.111 21328.989 - 21448.145: 98.8644% ( 4) 00:12:00.111 21448.145 - 21567.302: 98.9046% ( 5) 00:12:00.111 21567.302 - 21686.458: 98.9449% ( 5) 00:12:00.111 21686.458 - 21805.615: 98.9691% ( 3) 00:12:00.111 34078.720 - 34317.033: 98.9771% ( 1) 00:12:00.111 34317.033 - 34555.345: 99.0335% ( 7) 00:12:00.111 34555.345 - 34793.658: 99.0818% ( 6) 00:12:00.111 34793.658 - 35031.971: 99.1302% ( 6) 00:12:00.111 35031.971 - 35270.284: 99.1785% ( 6) 00:12:00.111 35270.284 - 35508.596: 99.2349% ( 7) 00:12:00.111 35508.596 - 35746.909: 99.2832% ( 6) 00:12:00.111 35746.909 - 35985.222: 99.3315% ( 6) 00:12:00.111 35985.222 - 36223.535: 99.3879% ( 7) 00:12:00.111 36223.535 - 36461.847: 99.4362% ( 6) 00:12:00.111 36461.847 - 36700.160: 99.4845% ( 6) 00:12:00.111 36700.160 - 36938.473: 99.5329% ( 6) 00:12:00.111 36938.473 - 37176.785: 99.5892% ( 7) 00:12:00.111 37176.785 - 37415.098: 99.6376% ( 6) 00:12:00.111 37415.098 - 37653.411: 99.6778% ( 5) 00:12:00.111 37653.411 - 37891.724: 99.7262% ( 6) 00:12:00.111 37891.724 - 38130.036: 99.7745% ( 6) 00:12:00.111 38130.036 - 
38368.349: 99.8228% ( 6) 00:12:00.111 38368.349 - 38606.662: 99.8792% ( 7) 00:12:00.111 38606.662 - 38844.975: 99.9275% ( 6) 00:12:00.111 38844.975 - 39083.287: 99.9758% ( 6) 00:12:00.111 39083.287 - 39321.600: 100.0000% ( 3) 00:12:00.111 00:12:00.111 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:12:00.111 ============================================================================== 00:12:00.111 Range in us Cumulative IO count 00:12:00.111 7328.116 - 7357.905: 0.0242% ( 3) 00:12:00.111 7357.905 - 7387.695: 0.0483% ( 3) 00:12:00.111 7387.695 - 7417.484: 0.1047% ( 7) 00:12:00.111 7417.484 - 7447.273: 0.1450% ( 5) 00:12:00.111 7447.273 - 7477.062: 0.2094% ( 8) 00:12:00.111 7477.062 - 7506.851: 0.2980% ( 11) 00:12:00.111 7506.851 - 7536.640: 0.3705% ( 9) 00:12:00.111 7536.640 - 7566.429: 0.4752% ( 13) 00:12:00.111 7566.429 - 7596.218: 0.6041% ( 16) 00:12:00.111 7596.218 - 7626.007: 0.7490% ( 18) 00:12:00.111 7626.007 - 7685.585: 1.0954% ( 43) 00:12:00.111 7685.585 - 7745.164: 1.5786% ( 60) 00:12:00.111 7745.164 - 7804.742: 2.1343% ( 69) 00:12:00.111 7804.742 - 7864.320: 2.7706% ( 79) 00:12:00.111 7864.320 - 7923.898: 3.3988% ( 78) 00:12:00.111 7923.898 - 7983.476: 4.1318% ( 91) 00:12:00.111 7983.476 - 8043.055: 4.8325% ( 87) 00:12:00.111 8043.055 - 8102.633: 5.6137% ( 97) 00:12:00.111 8102.633 - 8162.211: 6.4111% ( 99) 00:12:00.111 8162.211 - 8221.789: 7.1601% ( 93) 00:12:00.111 8221.789 - 8281.367: 7.9977% ( 104) 00:12:00.111 8281.367 - 8340.945: 8.8676% ( 108) 00:12:00.111 8340.945 - 8400.524: 9.7374% ( 108) 00:12:00.111 8400.524 - 8460.102: 10.6073% ( 108) 00:12:00.111 8460.102 - 8519.680: 11.6060% ( 124) 00:12:00.111 8519.680 - 8579.258: 12.6208% ( 126) 00:12:00.111 8579.258 - 8638.836: 13.6517% ( 128) 00:12:00.111 8638.836 - 8698.415: 14.8599% ( 150) 00:12:00.111 8698.415 - 8757.993: 16.1727% ( 163) 00:12:00.111 8757.993 - 8817.571: 17.5741% ( 174) 00:12:00.111 8817.571 - 8877.149: 19.0561% ( 184) 00:12:00.111 8877.149 - 8936.727: 20.6347% ( 196) 00:12:00.111 8936.727 - 8996.305: 22.1730% ( 191) 00:12:00.111 8996.305 - 9055.884: 23.7436% ( 195) 00:12:00.112 9055.884 - 9115.462: 25.4027% ( 206) 00:12:00.112 9115.462 - 9175.040: 27.1827% ( 221) 00:12:00.112 9175.040 - 9234.618: 28.9143% ( 215) 00:12:00.112 9234.618 - 9294.196: 30.8231% ( 237) 00:12:00.112 9294.196 - 9353.775: 32.8286% ( 249) 00:12:00.112 9353.775 - 9413.353: 34.8502% ( 251) 00:12:00.112 9413.353 - 9472.931: 36.9604% ( 262) 00:12:00.112 9472.931 - 9532.509: 39.1511% ( 272) 00:12:00.112 9532.509 - 9592.087: 41.3257% ( 270) 00:12:00.112 9592.087 - 9651.665: 43.4923% ( 269) 00:12:00.112 9651.665 - 9711.244: 45.5863% ( 260) 00:12:00.112 9711.244 - 9770.822: 47.6401% ( 255) 00:12:00.112 9770.822 - 9830.400: 49.7503% ( 262) 00:12:00.112 9830.400 - 9889.978: 51.8605% ( 262) 00:12:00.112 9889.978 - 9949.556: 53.8660% ( 249) 00:12:00.112 9949.556 - 10009.135: 55.8070% ( 241) 00:12:00.112 10009.135 - 10068.713: 57.6997% ( 235) 00:12:00.112 10068.713 - 10128.291: 59.4394% ( 216) 00:12:00.112 10128.291 - 10187.869: 61.1791% ( 216) 00:12:00.112 10187.869 - 10247.447: 62.9349% ( 218) 00:12:00.112 10247.447 - 10307.025: 64.5941% ( 206) 00:12:00.112 10307.025 - 10366.604: 66.2854% ( 210) 00:12:00.112 10366.604 - 10426.182: 67.8882% ( 199) 00:12:00.112 10426.182 - 10485.760: 69.4749% ( 197) 00:12:00.112 10485.760 - 10545.338: 71.0213% ( 192) 00:12:00.112 10545.338 - 10604.916: 72.5193% ( 186) 00:12:00.112 10604.916 - 10664.495: 73.9449% ( 177) 00:12:00.112 10664.495 - 10724.073: 75.3383% ( 173) 00:12:00.112 10724.073 - 
10783.651: 76.7316% ( 173) 00:12:00.112 10783.651 - 10843.229: 78.0525% ( 164) 00:12:00.112 10843.229 - 10902.807: 79.2767% ( 152) 00:12:00.112 10902.807 - 10962.385: 80.4607% ( 147) 00:12:00.112 10962.385 - 11021.964: 81.5399% ( 134) 00:12:00.112 11021.964 - 11081.542: 82.6353% ( 136) 00:12:00.112 11081.542 - 11141.120: 83.5938% ( 119) 00:12:00.112 11141.120 - 11200.698: 84.4878% ( 111) 00:12:00.112 11200.698 - 11260.276: 85.1965% ( 88) 00:12:00.112 11260.276 - 11319.855: 85.8731% ( 84) 00:12:00.112 11319.855 - 11379.433: 86.4852% ( 76) 00:12:00.112 11379.433 - 11439.011: 87.1053% ( 77) 00:12:00.112 11439.011 - 11498.589: 87.6128% ( 63) 00:12:00.112 11498.589 - 11558.167: 88.0638% ( 56) 00:12:00.112 11558.167 - 11617.745: 88.4423% ( 47) 00:12:00.112 11617.745 - 11677.324: 88.8128% ( 46) 00:12:00.112 11677.324 - 11736.902: 89.1511% ( 42) 00:12:00.112 11736.902 - 11796.480: 89.4813% ( 41) 00:12:00.112 11796.480 - 11856.058: 89.8035% ( 40) 00:12:00.112 11856.058 - 11915.636: 90.1095% ( 38) 00:12:00.112 11915.636 - 11975.215: 90.3914% ( 35) 00:12:00.112 11975.215 - 12034.793: 90.6653% ( 34) 00:12:00.112 12034.793 - 12094.371: 90.9472% ( 35) 00:12:00.112 12094.371 - 12153.949: 91.2452% ( 37) 00:12:00.112 12153.949 - 12213.527: 91.5271% ( 35) 00:12:00.112 12213.527 - 12273.105: 91.7848% ( 32) 00:12:00.112 12273.105 - 12332.684: 92.0425% ( 32) 00:12:00.112 12332.684 - 12392.262: 92.3244% ( 35) 00:12:00.112 12392.262 - 12451.840: 92.5822% ( 32) 00:12:00.112 12451.840 - 12511.418: 92.8238% ( 30) 00:12:00.112 12511.418 - 12570.996: 93.0815% ( 32) 00:12:00.112 12570.996 - 12630.575: 93.2748% ( 24) 00:12:00.112 12630.575 - 12690.153: 93.5084% ( 29) 00:12:00.112 12690.153 - 12749.731: 93.6775% ( 21) 00:12:00.112 12749.731 - 12809.309: 93.8144% ( 17) 00:12:00.112 12809.309 - 12868.887: 93.9836% ( 21) 00:12:00.112 12868.887 - 12928.465: 94.1124% ( 16) 00:12:00.112 12928.465 - 12988.044: 94.2091% ( 12) 00:12:00.112 12988.044 - 13047.622: 94.3138% ( 13) 00:12:00.112 13047.622 - 13107.200: 94.4104% ( 12) 00:12:00.112 13107.200 - 13166.778: 94.5151% ( 13) 00:12:00.112 13166.778 - 13226.356: 94.6118% ( 12) 00:12:00.112 13226.356 - 13285.935: 94.7084% ( 12) 00:12:00.112 13285.935 - 13345.513: 94.7890% ( 10) 00:12:00.112 13345.513 - 13405.091: 94.8776% ( 11) 00:12:00.112 13405.091 - 13464.669: 94.9823% ( 13) 00:12:00.112 13464.669 - 13524.247: 95.0789% ( 12) 00:12:00.112 13524.247 - 13583.825: 95.1595% ( 10) 00:12:00.112 13583.825 - 13643.404: 95.2561% ( 12) 00:12:00.112 13643.404 - 13702.982: 95.3367% ( 10) 00:12:00.112 13702.982 - 13762.560: 95.4011% ( 8) 00:12:00.112 13762.560 - 13822.138: 95.4816% ( 10) 00:12:00.112 13822.138 - 13881.716: 95.5702% ( 11) 00:12:00.112 13881.716 - 13941.295: 95.6508% ( 10) 00:12:00.112 13941.295 - 14000.873: 95.7474% ( 12) 00:12:00.112 14000.873 - 14060.451: 95.8360% ( 11) 00:12:00.112 14060.451 - 14120.029: 95.9246% ( 11) 00:12:00.112 14120.029 - 14179.607: 96.0052% ( 10) 00:12:00.112 14179.607 - 14239.185: 96.1179% ( 14) 00:12:00.112 14239.185 - 14298.764: 96.1985% ( 10) 00:12:00.112 14298.764 - 14358.342: 96.2790% ( 10) 00:12:00.112 14358.342 - 14417.920: 96.3595% ( 10) 00:12:00.112 14417.920 - 14477.498: 96.4320% ( 9) 00:12:00.112 14477.498 - 14537.076: 96.5126% ( 10) 00:12:00.112 14537.076 - 14596.655: 96.5851% ( 9) 00:12:00.112 14596.655 - 14656.233: 96.6495% ( 8) 00:12:00.112 14656.233 - 14715.811: 96.7139% ( 8) 00:12:00.112 14715.811 - 14775.389: 96.7945% ( 10) 00:12:00.112 14775.389 - 14834.967: 96.8669% ( 9) 00:12:00.112 14834.967 - 14894.545: 96.9475% ( 10) 
00:12:00.112 14894.545 - 14954.124: 97.0119% ( 8) 00:12:00.112 14954.124 - 15013.702: 97.0925% ( 10) 00:12:00.112 15013.702 - 15073.280: 97.1649% ( 9) 00:12:00.112 15073.280 - 15132.858: 97.2455% ( 10) 00:12:00.112 15132.858 - 15192.436: 97.3099% ( 8) 00:12:00.112 15192.436 - 15252.015: 97.3824% ( 9) 00:12:00.112 15252.015 - 15371.171: 97.5354% ( 19) 00:12:00.112 15371.171 - 15490.327: 97.6885% ( 19) 00:12:00.112 15490.327 - 15609.484: 97.8576% ( 21) 00:12:00.112 15609.484 - 15728.640: 98.0106% ( 19) 00:12:00.112 15728.640 - 15847.796: 98.1073% ( 12) 00:12:00.112 15847.796 - 15966.953: 98.2200% ( 14) 00:12:00.112 15966.953 - 16086.109: 98.3247% ( 13) 00:12:00.112 16086.109 - 16205.265: 98.3731% ( 6) 00:12:00.112 16205.265 - 16324.422: 98.4294% ( 7) 00:12:00.112 16324.422 - 16443.578: 98.4858% ( 7) 00:12:00.112 16443.578 - 16562.735: 98.5261% ( 5) 00:12:00.112 16562.735 - 16681.891: 98.5825% ( 7) 00:12:00.112 16681.891 - 16801.047: 98.6389% ( 7) 00:12:00.112 16801.047 - 16920.204: 98.6872% ( 6) 00:12:00.112 16920.204 - 17039.360: 98.7436% ( 7) 00:12:00.112 17039.360 - 17158.516: 98.7999% ( 7) 00:12:00.112 17158.516 - 17277.673: 98.8483% ( 6) 00:12:00.112 17277.673 - 17396.829: 98.9046% ( 7) 00:12:00.112 17396.829 - 17515.985: 98.9530% ( 6) 00:12:00.112 17515.985 - 17635.142: 98.9691% ( 2) 00:12:00.112 31933.905 - 32172.218: 98.9932% ( 3) 00:12:00.112 32172.218 - 32410.531: 99.0416% ( 6) 00:12:00.112 32410.531 - 32648.844: 99.0899% ( 6) 00:12:00.112 32648.844 - 32887.156: 99.1382% ( 6) 00:12:00.112 32887.156 - 33125.469: 99.1865% ( 6) 00:12:00.112 33125.469 - 33363.782: 99.2429% ( 7) 00:12:00.112 33363.782 - 33602.095: 99.2912% ( 6) 00:12:00.112 33602.095 - 33840.407: 99.3396% ( 6) 00:12:00.112 33840.407 - 34078.720: 99.3879% ( 6) 00:12:00.112 34078.720 - 34317.033: 99.4362% ( 6) 00:12:00.112 34317.033 - 34555.345: 99.4845% ( 6) 00:12:00.112 34555.345 - 34793.658: 99.5409% ( 7) 00:12:00.112 34793.658 - 35031.971: 99.5892% ( 6) 00:12:00.112 35031.971 - 35270.284: 99.6376% ( 6) 00:12:00.112 35270.284 - 35508.596: 99.6778% ( 5) 00:12:00.112 35508.596 - 35746.909: 99.7342% ( 7) 00:12:00.112 35746.909 - 35985.222: 99.7906% ( 7) 00:12:00.112 35985.222 - 36223.535: 99.8389% ( 6) 00:12:00.112 36223.535 - 36461.847: 99.8872% ( 6) 00:12:00.112 36461.847 - 36700.160: 99.9436% ( 7) 00:12:00.112 36700.160 - 36938.473: 99.9919% ( 6) 00:12:00.112 36938.473 - 37176.785: 100.0000% ( 1) 00:12:00.112 00:12:00.112 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:12:00.112 ============================================================================== 00:12:00.112 Range in us Cumulative IO count 00:12:00.112 7268.538 - 7298.327: 0.0239% ( 3) 00:12:00.112 7298.327 - 7328.116: 0.0319% ( 1) 00:12:00.112 7328.116 - 7357.905: 0.0877% ( 7) 00:12:00.112 7357.905 - 7387.695: 0.0957% ( 1) 00:12:00.112 7387.695 - 7417.484: 0.1435% ( 6) 00:12:00.112 7417.484 - 7447.273: 0.1913% ( 6) 00:12:00.112 7447.273 - 7477.062: 0.2471% ( 7) 00:12:00.112 7477.062 - 7506.851: 0.3189% ( 9) 00:12:00.112 7506.851 - 7536.640: 0.4225% ( 13) 00:12:00.112 7536.640 - 7566.429: 0.5501% ( 16) 00:12:00.112 7566.429 - 7596.218: 0.6936% ( 18) 00:12:00.112 7596.218 - 7626.007: 0.8371% ( 18) 00:12:00.112 7626.007 - 7685.585: 1.2038% ( 46) 00:12:00.112 7685.585 - 7745.164: 1.6183% ( 52) 00:12:00.113 7745.164 - 7804.742: 2.0966% ( 60) 00:12:00.113 7804.742 - 7864.320: 2.6307% ( 67) 00:12:00.113 7864.320 - 7923.898: 3.2207% ( 74) 00:12:00.113 7923.898 - 7983.476: 3.9541% ( 92) 00:12:00.113 7983.476 - 8043.055: 4.7034% ( 94) 00:12:00.113 
8043.055 - 8102.633: 5.4847% ( 98) 00:12:00.113 8102.633 - 8162.211: 6.3217% ( 105) 00:12:00.113 8162.211 - 8221.789: 7.1668% ( 106) 00:12:00.113 8221.789 - 8281.367: 8.0198% ( 107) 00:12:00.113 8281.367 - 8340.945: 8.9365% ( 115) 00:12:00.113 8340.945 - 8400.524: 9.8613% ( 116) 00:12:00.113 8400.524 - 8460.102: 10.8578% ( 125) 00:12:00.113 8460.102 - 8519.680: 11.8782% ( 128) 00:12:00.113 8519.680 - 8579.258: 12.8348% ( 120) 00:12:00.113 8579.258 - 8638.836: 13.9349% ( 138) 00:12:00.113 8638.836 - 8698.415: 15.1387% ( 151) 00:12:00.113 8698.415 - 8757.993: 16.3903% ( 157) 00:12:00.113 8757.993 - 8817.571: 17.7455% ( 170) 00:12:00.113 8817.571 - 8877.149: 19.2124% ( 184) 00:12:00.113 8877.149 - 8936.727: 20.7430% ( 192) 00:12:00.113 8936.727 - 8996.305: 22.3932% ( 207) 00:12:00.113 8996.305 - 9055.884: 24.1629% ( 222) 00:12:00.113 9055.884 - 9115.462: 25.9088% ( 219) 00:12:00.113 9115.462 - 9175.040: 27.7344% ( 229) 00:12:00.113 9175.040 - 9234.618: 29.5201% ( 224) 00:12:00.113 9234.618 - 9294.196: 31.4493% ( 242) 00:12:00.113 9294.196 - 9353.775: 33.4184% ( 247) 00:12:00.113 9353.775 - 9413.353: 35.2997% ( 236) 00:12:00.113 9413.353 - 9472.931: 37.3406% ( 256) 00:12:00.113 9472.931 - 9532.509: 39.4691% ( 267) 00:12:00.113 9532.509 - 9592.087: 41.5497% ( 261) 00:12:00.113 9592.087 - 9651.665: 43.6942% ( 269) 00:12:00.113 9651.665 - 9711.244: 45.8147% ( 266) 00:12:00.113 9711.244 - 9770.822: 47.9193% ( 264) 00:12:00.113 9770.822 - 9830.400: 50.0000% ( 261) 00:12:00.113 9830.400 - 9889.978: 51.9611% ( 246) 00:12:00.113 9889.978 - 9949.556: 54.0099% ( 257) 00:12:00.113 9949.556 - 10009.135: 55.9152% ( 239) 00:12:00.113 10009.135 - 10068.713: 57.7726% ( 233) 00:12:00.113 10068.713 - 10128.291: 59.6460% ( 235) 00:12:00.113 10128.291 - 10187.869: 61.4158% ( 222) 00:12:00.113 10187.869 - 10247.447: 63.1138% ( 213) 00:12:00.113 10247.447 - 10307.025: 64.7640% ( 207) 00:12:00.113 10307.025 - 10366.604: 66.3584% ( 200) 00:12:00.113 10366.604 - 10426.182: 67.9688% ( 202) 00:12:00.113 10426.182 - 10485.760: 69.5631% ( 200) 00:12:00.113 10485.760 - 10545.338: 71.1336% ( 197) 00:12:00.113 10545.338 - 10604.916: 72.6961% ( 196) 00:12:00.113 10604.916 - 10664.495: 74.0912% ( 175) 00:12:00.113 10664.495 - 10724.073: 75.5660% ( 185) 00:12:00.113 10724.073 - 10783.651: 76.7937% ( 154) 00:12:00.113 10783.651 - 10843.229: 78.0692% ( 160) 00:12:00.113 10843.229 - 10902.807: 79.3766% ( 164) 00:12:00.113 10902.807 - 10962.385: 80.5006% ( 141) 00:12:00.113 10962.385 - 11021.964: 81.5768% ( 135) 00:12:00.113 11021.964 - 11081.542: 82.6690% ( 137) 00:12:00.113 11081.542 - 11141.120: 83.6575% ( 124) 00:12:00.113 11141.120 - 11200.698: 84.4946% ( 105) 00:12:00.113 11200.698 - 11260.276: 85.2439% ( 94) 00:12:00.113 11260.276 - 11319.855: 85.8737% ( 79) 00:12:00.113 11319.855 - 11379.433: 86.5035% ( 79) 00:12:00.113 11379.433 - 11439.011: 87.0376% ( 67) 00:12:00.113 11439.011 - 11498.589: 87.4920% ( 57) 00:12:00.113 11498.589 - 11558.167: 87.9225% ( 54) 00:12:00.113 11558.167 - 11617.745: 88.3131% ( 49) 00:12:00.113 11617.745 - 11677.324: 88.6480% ( 42) 00:12:00.113 11677.324 - 11736.902: 88.9668% ( 40) 00:12:00.113 11736.902 - 11796.480: 89.3096% ( 43) 00:12:00.113 11796.480 - 11856.058: 89.6126% ( 38) 00:12:00.113 11856.058 - 11915.636: 89.9155% ( 38) 00:12:00.113 11915.636 - 11975.215: 90.2184% ( 38) 00:12:00.113 11975.215 - 12034.793: 90.5214% ( 38) 00:12:00.113 12034.793 - 12094.371: 90.8163% ( 37) 00:12:00.113 12094.371 - 12153.949: 91.0874% ( 34) 00:12:00.113 12153.949 - 12213.527: 91.3345% ( 31) 00:12:00.113 
12213.527 - 12273.105: 91.6055% ( 34) 00:12:00.113 12273.105 - 12332.684: 91.8367% ( 29) 00:12:00.113 12332.684 - 12392.262: 92.0679% ( 29) 00:12:00.113 12392.262 - 12451.840: 92.3071% ( 30) 00:12:00.113 12451.840 - 12511.418: 92.5303% ( 28) 00:12:00.113 12511.418 - 12570.996: 92.7695% ( 30) 00:12:00.113 12570.996 - 12630.575: 92.9767% ( 26) 00:12:00.113 12630.575 - 12690.153: 93.1521% ( 22) 00:12:00.113 12690.153 - 12749.731: 93.3195% ( 21) 00:12:00.113 12749.731 - 12809.309: 93.5108% ( 24) 00:12:00.113 12809.309 - 12868.887: 93.6783% ( 21) 00:12:00.113 12868.887 - 12928.465: 93.8536% ( 22) 00:12:00.113 12928.465 - 12988.044: 94.0290% ( 22) 00:12:00.113 12988.044 - 13047.622: 94.2044% ( 22) 00:12:00.113 13047.622 - 13107.200: 94.3798% ( 22) 00:12:00.113 13107.200 - 13166.778: 94.5233% ( 18) 00:12:00.113 13166.778 - 13226.356: 94.6429% ( 15) 00:12:00.113 13226.356 - 13285.935: 94.7624% ( 15) 00:12:00.113 13285.935 - 13345.513: 94.8980% ( 17) 00:12:00.113 13345.513 - 13405.091: 95.0255% ( 16) 00:12:00.113 13405.091 - 13464.669: 95.1531% ( 16) 00:12:00.113 13464.669 - 13524.247: 95.2726% ( 15) 00:12:00.113 13524.247 - 13583.825: 95.3922% ( 15) 00:12:00.113 13583.825 - 13643.404: 95.4959% ( 13) 00:12:00.113 13643.404 - 13702.982: 95.5676% ( 9) 00:12:00.113 13702.982 - 13762.560: 95.6393% ( 9) 00:12:00.113 13762.560 - 13822.138: 95.6952% ( 7) 00:12:00.113 13822.138 - 13881.716: 95.7589% ( 8) 00:12:00.113 13881.716 - 13941.295: 95.8147% ( 7) 00:12:00.113 13941.295 - 14000.873: 95.8705% ( 7) 00:12:00.113 14000.873 - 14060.451: 95.9503% ( 10) 00:12:00.113 14060.451 - 14120.029: 96.0061% ( 7) 00:12:00.113 14120.029 - 14179.607: 96.0698% ( 8) 00:12:00.113 14179.607 - 14239.185: 96.1256% ( 7) 00:12:00.113 14239.185 - 14298.764: 96.2054% ( 10) 00:12:00.113 14298.764 - 14358.342: 96.2851% ( 10) 00:12:00.113 14358.342 - 14417.920: 96.3728% ( 11) 00:12:00.113 14417.920 - 14477.498: 96.4605% ( 11) 00:12:00.113 14477.498 - 14537.076: 96.5721% ( 14) 00:12:00.113 14537.076 - 14596.655: 96.6438% ( 9) 00:12:00.113 14596.655 - 14656.233: 96.7395% ( 12) 00:12:00.113 14656.233 - 14715.811: 96.8112% ( 9) 00:12:00.113 14715.811 - 14775.389: 96.8909% ( 10) 00:12:00.113 14775.389 - 14834.967: 96.9707% ( 10) 00:12:00.113 14834.967 - 14894.545: 97.0265% ( 7) 00:12:00.113 14894.545 - 14954.124: 97.0902% ( 8) 00:12:00.113 14954.124 - 15013.702: 97.1381% ( 6) 00:12:00.113 15013.702 - 15073.280: 97.1939% ( 7) 00:12:00.113 15073.280 - 15132.858: 97.2497% ( 7) 00:12:00.113 15132.858 - 15192.436: 97.2975% ( 6) 00:12:00.113 15192.436 - 15252.015: 97.3533% ( 7) 00:12:00.113 15252.015 - 15371.171: 97.4251% ( 9) 00:12:00.113 15371.171 - 15490.327: 97.4809% ( 7) 00:12:00.113 15490.327 - 15609.484: 97.5606% ( 10) 00:12:00.113 15609.484 - 15728.640: 97.6323% ( 9) 00:12:00.113 15728.640 - 15847.796: 97.7121% ( 10) 00:12:00.113 15847.796 - 15966.953: 97.7918% ( 10) 00:12:00.113 15966.953 - 16086.109: 97.8874% ( 12) 00:12:00.113 16086.109 - 16205.265: 97.9672% ( 10) 00:12:00.113 16205.265 - 16324.422: 98.0469% ( 10) 00:12:00.113 16324.422 - 16443.578: 98.1186% ( 9) 00:12:00.113 16443.578 - 16562.735: 98.1904% ( 9) 00:12:00.113 16562.735 - 16681.891: 98.2701% ( 10) 00:12:00.113 16681.891 - 16801.047: 98.3339% ( 8) 00:12:00.113 16801.047 - 16920.204: 98.3897% ( 7) 00:12:00.113 16920.204 - 17039.360: 98.4295% ( 5) 00:12:00.113 17039.360 - 17158.516: 98.4694% ( 5) 00:12:00.113 17158.516 - 17277.673: 98.4933% ( 3) 00:12:00.113 17277.673 - 17396.829: 98.5332% ( 5) 00:12:00.113 17396.829 - 17515.985: 98.5730% ( 5) 00:12:00.113 17515.985 - 
17635.142: 98.6129% ( 5) 00:12:00.113 17635.142 - 17754.298: 98.6527% ( 5) 00:12:00.113 17754.298 - 17873.455: 98.6926% ( 5) 00:12:00.113 17873.455 - 17992.611: 98.7325% ( 5) 00:12:00.113 17992.611 - 18111.767: 98.7723% ( 5) 00:12:00.113 18111.767 - 18230.924: 98.8122% ( 5) 00:12:00.113 18230.924 - 18350.080: 98.8520% ( 5) 00:12:00.113 18350.080 - 18469.236: 98.8839% ( 4) 00:12:00.113 18469.236 - 18588.393: 98.9238% ( 5) 00:12:00.113 18588.393 - 18707.549: 98.9636% ( 5) 00:12:00.113 18707.549 - 18826.705: 98.9796% ( 2) 00:12:00.113 19899.113 - 20018.269: 98.9876% ( 1) 00:12:00.113 20018.269 - 20137.425: 99.0195% ( 4) 00:12:00.113 20137.425 - 20256.582: 99.0354% ( 2) 00:12:00.113 20256.582 - 20375.738: 99.0593% ( 3) 00:12:00.113 20375.738 - 20494.895: 99.0912% ( 4) 00:12:00.113 20494.895 - 20614.051: 99.1151% ( 3) 00:12:00.113 20614.051 - 20733.207: 99.1390% ( 3) 00:12:00.113 20733.207 - 20852.364: 99.1629% ( 3) 00:12:00.113 20852.364 - 20971.520: 99.1869% ( 3) 00:12:00.113 20971.520 - 21090.676: 99.2108% ( 3) 00:12:00.113 21090.676 - 21209.833: 99.2427% ( 4) 00:12:00.113 21209.833 - 21328.989: 99.2666% ( 3) 00:12:00.113 21328.989 - 21448.145: 99.2905% ( 3) 00:12:00.113 21448.145 - 21567.302: 99.3144% ( 3) 00:12:00.113 21567.302 - 21686.458: 99.3383% ( 3) 00:12:00.113 21686.458 - 21805.615: 99.3622% ( 3) 00:12:00.113 21805.615 - 21924.771: 99.3862% ( 3) 00:12:00.113 21924.771 - 22043.927: 99.4180% ( 4) 00:12:00.113 22043.927 - 22163.084: 99.4420% ( 3) 00:12:00.113 22163.084 - 22282.240: 99.4659% ( 3) 00:12:00.113 22282.240 - 22401.396: 99.4898% ( 3) 00:12:00.113 22401.396 - 22520.553: 99.5217% ( 4) 00:12:00.113 22520.553 - 22639.709: 99.5456% ( 3) 00:12:00.113 22639.709 - 22758.865: 99.5695% ( 3) 00:12:00.113 22758.865 - 22878.022: 99.5934% ( 3) 00:12:00.113 22878.022 - 22997.178: 99.6173% ( 3) 00:12:00.113 22997.178 - 23116.335: 99.6413% ( 3) 00:12:00.113 23116.335 - 23235.491: 99.6652% ( 3) 00:12:00.113 23235.491 - 23354.647: 99.6971% ( 4) 00:12:00.113 23354.647 - 23473.804: 99.7210% ( 3) 00:12:00.113 23473.804 - 23592.960: 99.7449% ( 3) 00:12:00.113 23592.960 - 23712.116: 99.7688% ( 3) 00:12:00.113 23712.116 - 23831.273: 99.7927% ( 3) 00:12:00.113 23831.273 - 23950.429: 99.8166% ( 3) 00:12:00.113 23950.429 - 24069.585: 99.8406% ( 3) 00:12:00.113 24069.585 - 24188.742: 99.8645% ( 3) 00:12:00.113 24188.742 - 24307.898: 99.8884% ( 3) 00:12:00.114 24307.898 - 24427.055: 99.9203% ( 4) 00:12:00.114 24427.055 - 24546.211: 99.9442% ( 3) 00:12:00.114 24546.211 - 24665.367: 99.9681% ( 3) 00:12:00.114 24665.367 - 24784.524: 99.9920% ( 3) 00:12:00.114 24784.524 - 24903.680: 100.0000% ( 1) 00:12:00.114 00:12:00.114 12:33:09 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:01.493 Initializing NVMe Controllers 00:12:01.493 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:01.493 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:01.493 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:01.493 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:01.493 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:01.493 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:01.493 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:01.493 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:01.494 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:01.494 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:01.494 Initialization complete. Launching workers. 
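As with the read pass above, the write run below prints one device table (IOPS, MiB/s, and average/min/max latency in microseconds per namespace, plus a Total row), followed by per-device percentile summaries and full latency histograms; the doubled -L flag on the spdk_nvme_perf command line is what requests the detailed latency tracking behind them. To lift the aggregate row out of a captured run, a plain awk filter over the table is enough; a minimal sketch, assuming the output was saved to a hypothetical perf.log with the Jenkins timestamp prefixes already stripped:

  # totals row of the spdk_nvme_perf device table:
  # $3=IOPS, $4=MiB/s, $5=average, $6=min, $7=max (latencies in microseconds)
  awk '$1 == "Total" { printf "IOPS=%s MiB/s=%s avg=%sus min=%sus max=%sus\n", $3, $4, $5, $6, $7 }' perf.log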
00:12:01.494 ======================================================== 00:12:01.494 Latency(us) 00:12:01.494 Device Information : IOPS MiB/s Average min max 00:12:01.494 PCIE (0000:00:06.0) NSID 1 from core 0: 6855.81 80.34 18663.13 11962.24 40232.17 00:12:01.494 PCIE (0000:00:07.0) NSID 1 from core 0: 6855.81 80.34 18651.16 12071.96 38341.84 00:12:01.494 PCIE (0000:00:09.0) NSID 1 from core 0: 6855.81 80.34 18638.35 12118.74 37870.11 00:12:01.494 PCIE (0000:00:08.0) NSID 1 from core 0: 6855.81 80.34 18624.85 12206.68 35641.56 00:12:01.494 PCIE (0000:00:08.0) NSID 2 from core 0: 6855.81 80.34 18612.76 12583.43 33845.19 00:12:01.494 PCIE (0000:00:08.0) NSID 3 from core 0: 6855.81 80.34 18601.15 12993.07 31969.11 00:12:01.494 ======================================================== 00:12:01.494 Total : 41134.86 482.05 18631.90 11962.24 40232.17 00:12:01.494 00:12:01.494 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:01.494 ================================================================================= 00:12:01.494 1.00000% : 13047.622us 00:12:01.494 10.00000% : 15609.484us 00:12:01.494 25.00000% : 16443.578us 00:12:01.494 50.00000% : 18469.236us 00:12:01.494 75.00000% : 20256.582us 00:12:01.494 90.00000% : 21448.145us 00:12:01.494 95.00000% : 21805.615us 00:12:01.494 98.00000% : 26095.244us 00:12:01.494 99.00000% : 37176.785us 00:12:01.494 99.50000% : 38844.975us 00:12:01.494 99.90000% : 40036.538us 00:12:01.494 99.99000% : 40274.851us 00:12:01.494 99.99900% : 40274.851us 00:12:01.494 99.99990% : 40274.851us 00:12:01.494 99.99999% : 40274.851us 00:12:01.494 00:12:01.494 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:12:01.494 ================================================================================= 00:12:01.494 1.00000% : 13166.778us 00:12:01.494 10.00000% : 16443.578us 00:12:01.494 25.00000% : 17158.516us 00:12:01.494 50.00000% : 18350.080us 00:12:01.494 75.00000% : 19899.113us 00:12:01.494 90.00000% : 20852.364us 00:12:01.494 95.00000% : 21328.989us 00:12:01.494 98.00000% : 25261.149us 00:12:01.494 99.00000% : 35508.596us 00:12:01.494 99.50000% : 36938.473us 00:12:01.494 99.90000% : 38130.036us 00:12:01.494 99.99000% : 38368.349us 00:12:01.494 99.99900% : 38368.349us 00:12:01.494 99.99990% : 38368.349us 00:12:01.494 99.99999% : 38368.349us 00:12:01.494 00:12:01.494 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:12:01.494 ================================================================================= 00:12:01.494 1.00000% : 13047.622us 00:12:01.494 10.00000% : 16443.578us 00:12:01.494 25.00000% : 17158.516us 00:12:01.494 50.00000% : 18350.080us 00:12:01.494 75.00000% : 19779.956us 00:12:01.494 90.00000% : 20852.364us 00:12:01.494 95.00000% : 21328.989us 00:12:01.494 98.00000% : 25022.836us 00:12:01.494 99.00000% : 34555.345us 00:12:01.494 99.50000% : 36461.847us 00:12:01.494 99.90000% : 37653.411us 00:12:01.494 99.99000% : 37891.724us 00:12:01.494 99.99900% : 37891.724us 00:12:01.494 99.99990% : 37891.724us 00:12:01.494 99.99999% : 37891.724us 00:12:01.494 00:12:01.494 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:12:01.494 ================================================================================= 00:12:01.494 1.00000% : 13166.778us 00:12:01.494 10.00000% : 16443.578us 00:12:01.494 25.00000% : 17277.673us 00:12:01.494 50.00000% : 18350.080us 00:12:01.494 75.00000% : 19779.956us 00:12:01.494 90.00000% : 20852.364us 00:12:01.494 95.00000% : 21328.989us 00:12:01.494 98.00000% : 25618.618us 
00:12:01.494 99.00000% : 32887.156us 00:12:01.494 99.50000% : 34317.033us 00:12:01.494 99.90000% : 35508.596us 00:12:01.494 99.99000% : 35746.909us 00:12:01.494 99.99900% : 35746.909us 00:12:01.494 99.99990% : 35746.909us 00:12:01.494 99.99999% : 35746.909us 00:12:01.494 00:12:01.494 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:12:01.494 ================================================================================= 00:12:01.494 1.00000% : 13405.091us 00:12:01.494 10.00000% : 16443.578us 00:12:01.494 25.00000% : 17158.516us 00:12:01.494 50.00000% : 18350.080us 00:12:01.494 75.00000% : 19899.113us 00:12:01.494 90.00000% : 20852.364us 00:12:01.494 95.00000% : 21328.989us 00:12:01.494 98.00000% : 25618.618us 00:12:01.494 99.00000% : 31218.967us 00:12:01.494 99.50000% : 32648.844us 00:12:01.494 99.90000% : 33840.407us 00:12:01.494 99.99000% : 34078.720us 00:12:01.494 99.99900% : 34078.720us 00:12:01.494 99.99990% : 34078.720us 00:12:01.494 99.99999% : 34078.720us 00:12:01.494 00:12:01.494 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:12:01.494 ================================================================================= 00:12:01.494 1.00000% : 13881.716us 00:12:01.494 10.00000% : 16443.578us 00:12:01.494 25.00000% : 17277.673us 00:12:01.494 50.00000% : 18350.080us 00:12:01.494 75.00000% : 19779.956us 00:12:01.494 90.00000% : 20852.364us 00:12:01.494 95.00000% : 21209.833us 00:12:01.494 98.00000% : 25618.618us 00:12:01.494 99.00000% : 29074.153us 00:12:01.494 99.50000% : 30504.029us 00:12:01.494 99.90000% : 31933.905us 00:12:01.494 99.99000% : 32172.218us 00:12:01.494 99.99900% : 32172.218us 00:12:01.494 99.99990% : 32172.218us 00:12:01.494 99.99999% : 32172.218us 00:12:01.494 00:12:01.494 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:01.494 ============================================================================== 00:12:01.494 Range in us Cumulative IO count 00:12:01.494 11915.636 - 11975.215: 0.1157% ( 8) 00:12:01.494 11975.215 - 12034.793: 0.2170% ( 7) 00:12:01.494 12034.793 - 12094.371: 0.2459% ( 2) 00:12:01.494 12094.371 - 12153.949: 0.2894% ( 3) 00:12:01.494 12153.949 - 12213.527: 0.3038% ( 1) 00:12:01.494 12213.527 - 12273.105: 0.3183% ( 1) 00:12:01.494 12273.105 - 12332.684: 0.3328% ( 1) 00:12:01.494 12332.684 - 12392.262: 0.3472% ( 1) 00:12:01.494 12392.262 - 12451.840: 0.3762% ( 2) 00:12:01.494 12451.840 - 12511.418: 0.4051% ( 2) 00:12:01.494 12511.418 - 12570.996: 0.4485% ( 3) 00:12:01.494 12570.996 - 12630.575: 0.4774% ( 2) 00:12:01.494 12630.575 - 12690.153: 0.4919% ( 1) 00:12:01.494 12690.153 - 12749.731: 0.5642% ( 5) 00:12:01.494 12749.731 - 12809.309: 0.6510% ( 6) 00:12:01.494 12809.309 - 12868.887: 0.8391% ( 13) 00:12:01.494 12868.887 - 12928.465: 0.9259% ( 6) 00:12:01.494 12928.465 - 12988.044: 0.9838% ( 4) 00:12:01.494 12988.044 - 13047.622: 1.0417% ( 4) 00:12:01.494 13047.622 - 13107.200: 1.1285% ( 6) 00:12:01.494 13107.200 - 13166.778: 1.1719% ( 3) 00:12:01.494 13166.778 - 13226.356: 1.2297% ( 4) 00:12:01.494 13226.356 - 13285.935: 1.2587% ( 2) 00:12:01.494 13285.935 - 13345.513: 1.3021% ( 3) 00:12:01.494 13345.513 - 13405.091: 1.3310% ( 2) 00:12:01.494 13405.091 - 13464.669: 1.3600% ( 2) 00:12:01.494 13464.669 - 13524.247: 1.4468% ( 6) 00:12:01.494 13524.247 - 13583.825: 1.5046% ( 4) 00:12:01.494 13583.825 - 13643.404: 1.5770% ( 5) 00:12:01.494 13643.404 - 13702.982: 1.6204% ( 3) 00:12:01.494 13702.982 - 13762.560: 1.6927% ( 5) 00:12:01.494 13762.560 - 13822.138: 1.7650% ( 5) 00:12:01.494 
13822.138 - 13881.716: 1.8374% ( 5) 00:12:01.494 13881.716 - 13941.295: 1.9097% ( 5) 00:12:01.494 13941.295 - 14000.873: 2.0255% ( 8) 00:12:01.494 14000.873 - 14060.451: 2.1123% ( 6) 00:12:01.494 14060.451 - 14120.029: 2.1991% ( 6) 00:12:01.494 14120.029 - 14179.607: 2.3003% ( 7) 00:12:01.494 14179.607 - 14239.185: 2.4306% ( 9) 00:12:01.494 14239.185 - 14298.764: 2.6042% ( 12) 00:12:01.494 14298.764 - 14358.342: 2.6765% ( 5) 00:12:01.494 14358.342 - 14417.920: 2.7488% ( 5) 00:12:01.494 14417.920 - 14477.498: 2.8791% ( 9) 00:12:01.494 14477.498 - 14537.076: 2.9514% ( 5) 00:12:01.494 14537.076 - 14596.655: 2.9803% ( 2) 00:12:01.494 14596.655 - 14656.233: 3.1250% ( 10) 00:12:01.494 14656.233 - 14715.811: 3.1539% ( 2) 00:12:01.494 14715.811 - 14775.389: 3.2697% ( 8) 00:12:01.494 14775.389 - 14834.967: 3.3565% ( 6) 00:12:01.494 14834.967 - 14894.545: 3.4578% ( 7) 00:12:01.494 14894.545 - 14954.124: 3.5590% ( 7) 00:12:01.494 14954.124 - 15013.702: 3.7037% ( 10) 00:12:01.494 15013.702 - 15073.280: 3.7760% ( 5) 00:12:01.494 15073.280 - 15132.858: 4.0220% ( 17) 00:12:01.494 15132.858 - 15192.436: 4.1667% ( 10) 00:12:01.494 15192.436 - 15252.015: 4.5573% ( 27) 00:12:01.494 15252.015 - 15371.171: 5.7147% ( 80) 00:12:01.494 15371.171 - 15490.327: 7.3640% ( 114) 00:12:01.494 15490.327 - 15609.484: 10.0839% ( 188) 00:12:01.494 15609.484 - 15728.640: 12.3264% ( 155) 00:12:01.494 15728.640 - 15847.796: 14.6123% ( 158) 00:12:01.494 15847.796 - 15966.953: 17.2020% ( 179) 00:12:01.494 15966.953 - 16086.109: 19.0538% ( 128) 00:12:01.494 16086.109 - 16205.265: 21.2963% ( 155) 00:12:01.494 16205.265 - 16324.422: 23.5243% ( 154) 00:12:01.494 16324.422 - 16443.578: 25.2894% ( 122) 00:12:01.494 16443.578 - 16562.735: 27.1846% ( 131) 00:12:01.494 16562.735 - 16681.891: 28.9352% ( 121) 00:12:01.494 16681.891 - 16801.047: 30.7292% ( 124) 00:12:01.494 16801.047 - 16920.204: 32.0747% ( 93) 00:12:01.494 16920.204 - 17039.360: 33.4346% ( 94) 00:12:01.494 17039.360 - 17158.516: 34.7512% ( 91) 00:12:01.494 17158.516 - 17277.673: 36.0243% ( 88) 00:12:01.495 17277.673 - 17396.829: 37.3843% ( 94) 00:12:01.495 17396.829 - 17515.985: 38.6863% ( 90) 00:12:01.495 17515.985 - 17635.142: 40.1910% ( 104) 00:12:01.495 17635.142 - 17754.298: 41.7824% ( 110) 00:12:01.495 17754.298 - 17873.455: 43.2581% ( 102) 00:12:01.495 17873.455 - 17992.611: 44.9219% ( 115) 00:12:01.495 17992.611 - 18111.767: 46.4844% ( 108) 00:12:01.495 18111.767 - 18230.924: 48.0324% ( 107) 00:12:01.495 18230.924 - 18350.080: 49.5226% ( 103) 00:12:01.495 18350.080 - 18469.236: 51.1863% ( 115) 00:12:01.495 18469.236 - 18588.393: 52.9514% ( 122) 00:12:01.495 18588.393 - 18707.549: 54.4994% ( 107) 00:12:01.495 18707.549 - 18826.705: 56.0764% ( 109) 00:12:01.495 18826.705 - 18945.862: 57.6823% ( 111) 00:12:01.495 18945.862 - 19065.018: 59.2159% ( 106) 00:12:01.495 19065.018 - 19184.175: 60.7928% ( 109) 00:12:01.495 19184.175 - 19303.331: 62.5000% ( 118) 00:12:01.495 19303.331 - 19422.487: 64.0191% ( 105) 00:12:01.495 19422.487 - 19541.644: 65.4369% ( 98) 00:12:01.495 19541.644 - 19660.800: 67.1007% ( 115) 00:12:01.495 19660.800 - 19779.956: 68.7500% ( 114) 00:12:01.495 19779.956 - 19899.113: 70.2836% ( 106) 00:12:01.495 19899.113 - 20018.269: 71.9184% ( 113) 00:12:01.495 20018.269 - 20137.425: 73.4954% ( 109) 00:12:01.495 20137.425 - 20256.582: 75.2025% ( 118) 00:12:01.495 20256.582 - 20375.738: 76.7650% ( 108) 00:12:01.495 20375.738 - 20494.895: 78.2552% ( 103) 00:12:01.495 20494.895 - 20614.051: 80.0203% ( 122) 00:12:01.495 20614.051 - 20733.207: 81.5249% ( 104) 
00:12:01.495 20733.207 - 20852.364: 83.1019% ( 109) 00:12:01.495 20852.364 - 20971.520: 84.8669% ( 122) 00:12:01.495 20971.520 - 21090.676: 86.4583% ( 110) 00:12:01.495 21090.676 - 21209.833: 88.0642% ( 111) 00:12:01.495 21209.833 - 21328.989: 89.6557% ( 110) 00:12:01.495 21328.989 - 21448.145: 91.3194% ( 115) 00:12:01.495 21448.145 - 21567.302: 92.8530% ( 106) 00:12:01.495 21567.302 - 21686.458: 94.1985% ( 93) 00:12:01.495 21686.458 - 21805.615: 95.2836% ( 75) 00:12:01.495 21805.615 - 21924.771: 96.0214% ( 51) 00:12:01.495 21924.771 - 22043.927: 96.3542% ( 23) 00:12:01.495 22043.927 - 22163.084: 96.5712% ( 15) 00:12:01.495 22163.084 - 22282.240: 96.7882% ( 15) 00:12:01.495 22282.240 - 22401.396: 96.9039% ( 8) 00:12:01.495 22401.396 - 22520.553: 96.9907% ( 6) 00:12:01.495 22520.553 - 22639.709: 97.0920% ( 7) 00:12:01.495 22639.709 - 22758.865: 97.1209% ( 2) 00:12:01.495 22758.865 - 22878.022: 97.1644% ( 3) 00:12:01.495 22878.022 - 22997.178: 97.1788% ( 1) 00:12:01.495 22997.178 - 23116.335: 97.2222% ( 3) 00:12:01.495 23116.335 - 23235.491: 97.2512% ( 2) 00:12:01.495 23235.491 - 23354.647: 97.2656% ( 1) 00:12:01.495 23354.647 - 23473.804: 97.3235% ( 4) 00:12:01.495 23473.804 - 23592.960: 97.3380% ( 1) 00:12:01.495 23592.960 - 23712.116: 97.3814% ( 3) 00:12:01.495 23712.116 - 23831.273: 97.3958% ( 1) 00:12:01.495 23831.273 - 23950.429: 97.4392% ( 3) 00:12:01.495 23950.429 - 24069.585: 97.4682% ( 2) 00:12:01.495 24069.585 - 24188.742: 97.4971% ( 2) 00:12:01.495 24188.742 - 24307.898: 97.5260% ( 2) 00:12:01.495 24307.898 - 24427.055: 97.5550% ( 2) 00:12:01.495 24427.055 - 24546.211: 97.5694% ( 1) 00:12:01.495 24546.211 - 24665.367: 97.6418% ( 5) 00:12:01.495 24784.524 - 24903.680: 97.6852% ( 3) 00:12:01.495 24903.680 - 25022.836: 97.6997% ( 1) 00:12:01.495 25022.836 - 25141.993: 97.7575% ( 4) 00:12:01.495 25141.993 - 25261.149: 97.7720% ( 1) 00:12:01.495 25261.149 - 25380.305: 97.8154% ( 3) 00:12:01.495 25380.305 - 25499.462: 97.8443% ( 2) 00:12:01.495 25499.462 - 25618.618: 97.8733% ( 2) 00:12:01.495 25618.618 - 25737.775: 97.9022% ( 2) 00:12:01.495 25737.775 - 25856.931: 97.9311% ( 2) 00:12:01.495 25856.931 - 25976.087: 97.9456% ( 1) 00:12:01.495 25976.087 - 26095.244: 98.0035% ( 4) 00:12:01.495 26095.244 - 26214.400: 98.0179% ( 1) 00:12:01.495 26214.400 - 26333.556: 98.0613% ( 3) 00:12:01.495 26333.556 - 26452.713: 98.0903% ( 2) 00:12:01.495 26452.713 - 26571.869: 98.1192% ( 2) 00:12:01.495 26571.869 - 26691.025: 98.1481% ( 2) 00:12:01.495 34078.720 - 34317.033: 98.2060% ( 4) 00:12:01.495 34317.033 - 34555.345: 98.2639% ( 4) 00:12:01.495 34555.345 - 34793.658: 98.3362% ( 5) 00:12:01.495 34793.658 - 35031.971: 98.4086% ( 5) 00:12:01.495 35031.971 - 35270.284: 98.4809% ( 5) 00:12:01.495 35270.284 - 35508.596: 98.5532% ( 5) 00:12:01.495 35508.596 - 35746.909: 98.6256% ( 5) 00:12:01.495 35746.909 - 35985.222: 98.6979% ( 5) 00:12:01.495 35985.222 - 36223.535: 98.7558% ( 4) 00:12:01.495 36223.535 - 36461.847: 98.8281% ( 5) 00:12:01.495 36461.847 - 36700.160: 98.8860% ( 4) 00:12:01.495 36700.160 - 36938.473: 98.9873% ( 7) 00:12:01.495 36938.473 - 37176.785: 99.0307% ( 3) 00:12:01.495 37176.785 - 37415.098: 99.1175% ( 6) 00:12:01.495 37415.098 - 37653.411: 99.1898% ( 5) 00:12:01.495 37653.411 - 37891.724: 99.2622% ( 5) 00:12:01.495 37891.724 - 38130.036: 99.3345% ( 5) 00:12:01.495 38130.036 - 38368.349: 99.4213% ( 6) 00:12:01.495 38368.349 - 38606.662: 99.4792% ( 4) 00:12:01.495 38606.662 - 38844.975: 99.5515% ( 5) 00:12:01.495 38844.975 - 39083.287: 99.6383% ( 6) 00:12:01.495 39083.287 - 
39321.600: 99.7251% ( 6) 00:12:01.495 39321.600 - 39559.913: 99.7830% ( 4) 00:12:01.495 39559.913 - 39798.225: 99.8698% ( 6) 00:12:01.495 39798.225 - 40036.538: 99.9421% ( 5) 00:12:01.495 40036.538 - 40274.851: 100.0000% ( 4) 00:12:01.495 00:12:01.495 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:12:01.495 ============================================================================== 00:12:01.495 Range in us Cumulative IO count 00:12:01.495 12034.793 - 12094.371: 0.0434% ( 3) 00:12:01.495 12094.371 - 12153.949: 0.1013% ( 4) 00:12:01.495 12153.949 - 12213.527: 0.1447% ( 3) 00:12:01.495 12213.527 - 12273.105: 0.2025% ( 4) 00:12:01.495 12273.105 - 12332.684: 0.2459% ( 3) 00:12:01.495 12332.684 - 12392.262: 0.2894% ( 3) 00:12:01.495 12392.262 - 12451.840: 0.3472% ( 4) 00:12:01.495 12451.840 - 12511.418: 0.3906% ( 3) 00:12:01.495 12511.418 - 12570.996: 0.4485% ( 4) 00:12:01.495 12570.996 - 12630.575: 0.4774% ( 2) 00:12:01.495 12630.575 - 12690.153: 0.5498% ( 5) 00:12:01.495 12690.153 - 12749.731: 0.6076% ( 4) 00:12:01.495 12749.731 - 12809.309: 0.6366% ( 2) 00:12:01.495 12809.309 - 12868.887: 0.6800% ( 3) 00:12:01.495 12868.887 - 12928.465: 0.7089% ( 2) 00:12:01.495 12928.465 - 12988.044: 0.7812% ( 5) 00:12:01.495 12988.044 - 13047.622: 0.8681% ( 6) 00:12:01.495 13047.622 - 13107.200: 0.9549% ( 6) 00:12:01.495 13107.200 - 13166.778: 1.0417% ( 6) 00:12:01.495 13166.778 - 13226.356: 1.1140% ( 5) 00:12:01.495 13226.356 - 13285.935: 1.2008% ( 6) 00:12:01.495 13285.935 - 13345.513: 1.3021% ( 7) 00:12:01.495 13345.513 - 13405.091: 1.4034% ( 7) 00:12:01.495 13405.091 - 13464.669: 1.5046% ( 7) 00:12:01.495 13464.669 - 13524.247: 1.5480% ( 3) 00:12:01.495 13524.247 - 13583.825: 1.6348% ( 6) 00:12:01.495 13583.825 - 13643.404: 1.7072% ( 5) 00:12:01.495 13643.404 - 13702.982: 1.7650% ( 4) 00:12:01.495 13702.982 - 13762.560: 1.8084% ( 3) 00:12:01.495 13762.560 - 13822.138: 1.8808% ( 5) 00:12:01.495 13822.138 - 13881.716: 1.9387% ( 4) 00:12:01.495 13881.716 - 13941.295: 2.0110% ( 5) 00:12:01.495 13941.295 - 14000.873: 2.0978% ( 6) 00:12:01.495 14000.873 - 14060.451: 2.1701% ( 5) 00:12:01.495 14060.451 - 14120.029: 2.2425% ( 5) 00:12:01.495 14120.029 - 14179.607: 2.3148% ( 5) 00:12:01.495 14179.607 - 14239.185: 2.3872% ( 5) 00:12:01.495 14239.185 - 14298.764: 2.4740% ( 6) 00:12:01.495 14298.764 - 14358.342: 2.5463% ( 5) 00:12:01.495 14358.342 - 14417.920: 2.6331% ( 6) 00:12:01.495 14417.920 - 14477.498: 2.6910% ( 4) 00:12:01.495 14477.498 - 14537.076: 2.7633% ( 5) 00:12:01.495 14537.076 - 14596.655: 2.8501% ( 6) 00:12:01.495 14596.655 - 14656.233: 2.9225% ( 5) 00:12:01.495 14656.233 - 14715.811: 2.9948% ( 5) 00:12:01.495 14715.811 - 14775.389: 3.0671% ( 5) 00:12:01.495 14775.389 - 14834.967: 3.1539% ( 6) 00:12:01.495 14834.967 - 14894.545: 3.1973% ( 3) 00:12:01.495 14894.545 - 14954.124: 3.2407% ( 3) 00:12:01.495 14954.124 - 15013.702: 3.2841% ( 3) 00:12:01.495 15013.702 - 15073.280: 3.3275% ( 3) 00:12:01.495 15073.280 - 15132.858: 3.3565% ( 2) 00:12:01.495 15132.858 - 15192.436: 3.3999% ( 3) 00:12:01.495 15192.436 - 15252.015: 3.4433% ( 3) 00:12:01.495 15252.015 - 15371.171: 3.5301% ( 6) 00:12:01.495 15371.171 - 15490.327: 3.6024% ( 5) 00:12:01.495 15490.327 - 15609.484: 3.7905% ( 13) 00:12:01.495 15609.484 - 15728.640: 3.9931% ( 14) 00:12:01.495 15728.640 - 15847.796: 4.3837% ( 27) 00:12:01.495 15847.796 - 15966.953: 4.9334% ( 38) 00:12:01.495 15966.953 - 16086.109: 5.7726% ( 58) 00:12:01.495 16086.109 - 16205.265: 7.2483% ( 102) 00:12:01.495 16205.265 - 16324.422: 9.0133% ( 122) 
00:12:01.495 16324.422 - 16443.578: 11.0966% ( 144) 00:12:01.495 16443.578 - 16562.735: 13.5272% ( 168) 00:12:01.495 16562.735 - 16681.891: 16.1748% ( 183) 00:12:01.495 16681.891 - 16801.047: 18.8947% ( 188) 00:12:01.495 16801.047 - 16920.204: 21.7159% ( 195) 00:12:01.495 16920.204 - 17039.360: 24.4647% ( 190) 00:12:01.495 17039.360 - 17158.516: 27.4595% ( 207) 00:12:01.495 17158.516 - 17277.673: 30.2807% ( 195) 00:12:01.495 17277.673 - 17396.829: 33.0006% ( 188) 00:12:01.495 17396.829 - 17515.985: 35.4890% ( 172) 00:12:01.495 17515.985 - 17635.142: 37.9196% ( 168) 00:12:01.495 17635.142 - 17754.298: 40.3067% ( 165) 00:12:01.495 17754.298 - 17873.455: 42.5637% ( 156) 00:12:01.495 17873.455 - 17992.611: 44.7917% ( 154) 00:12:01.495 17992.611 - 18111.767: 47.4971% ( 187) 00:12:01.495 18111.767 - 18230.924: 49.7251% ( 154) 00:12:01.495 18230.924 - 18350.080: 51.9676% ( 155) 00:12:01.495 18350.080 - 18469.236: 54.2679% ( 159) 00:12:01.495 18469.236 - 18588.393: 56.6985% ( 168) 00:12:01.495 18588.393 - 18707.549: 58.5938% ( 131) 00:12:01.495 18707.549 - 18826.705: 60.3154% ( 119) 00:12:01.495 18826.705 - 18945.862: 62.1383% ( 126) 00:12:01.495 18945.862 - 19065.018: 64.0191% ( 130) 00:12:01.495 19065.018 - 19184.175: 65.8275% ( 125) 00:12:01.496 19184.175 - 19303.331: 67.5781% ( 121) 00:12:01.496 19303.331 - 19422.487: 69.3576% ( 123) 00:12:01.496 19422.487 - 19541.644: 71.1661% ( 125) 00:12:01.496 19541.644 - 19660.800: 72.8733% ( 118) 00:12:01.496 19660.800 - 19779.956: 74.5660% ( 117) 00:12:01.496 19779.956 - 19899.113: 76.5625% ( 138) 00:12:01.496 19899.113 - 20018.269: 78.3131% ( 121) 00:12:01.496 20018.269 - 20137.425: 80.0781% ( 122) 00:12:01.496 20137.425 - 20256.582: 81.8576% ( 123) 00:12:01.496 20256.582 - 20375.738: 83.6806% ( 126) 00:12:01.496 20375.738 - 20494.895: 85.4311% ( 121) 00:12:01.496 20494.895 - 20614.051: 87.1817% ( 121) 00:12:01.496 20614.051 - 20733.207: 88.9178% ( 120) 00:12:01.496 20733.207 - 20852.364: 90.5961% ( 116) 00:12:01.496 20852.364 - 20971.520: 92.1875% ( 110) 00:12:01.496 20971.520 - 21090.676: 93.6053% ( 98) 00:12:01.496 21090.676 - 21209.833: 94.6904% ( 75) 00:12:01.496 21209.833 - 21328.989: 95.3848% ( 48) 00:12:01.496 21328.989 - 21448.145: 95.7755% ( 27) 00:12:01.496 21448.145 - 21567.302: 96.0648% ( 20) 00:12:01.496 21567.302 - 21686.458: 96.3108% ( 17) 00:12:01.496 21686.458 - 21805.615: 96.4410% ( 9) 00:12:01.496 21805.615 - 21924.771: 96.5712% ( 9) 00:12:01.496 21924.771 - 22043.927: 96.7014% ( 9) 00:12:01.496 22043.927 - 22163.084: 96.7737% ( 5) 00:12:01.496 22163.084 - 22282.240: 96.8316% ( 4) 00:12:01.496 22282.240 - 22401.396: 96.8895% ( 4) 00:12:01.496 22401.396 - 22520.553: 96.9618% ( 5) 00:12:01.496 22520.553 - 22639.709: 97.0052% ( 3) 00:12:01.496 22639.709 - 22758.865: 97.0486% ( 3) 00:12:01.496 22758.865 - 22878.022: 97.1065% ( 4) 00:12:01.496 22878.022 - 22997.178: 97.1499% ( 3) 00:12:01.496 22997.178 - 23116.335: 97.2512% ( 7) 00:12:01.496 23116.335 - 23235.491: 97.4103% ( 11) 00:12:01.496 23235.491 - 23354.647: 97.4682% ( 4) 00:12:01.496 23354.647 - 23473.804: 97.4971% ( 2) 00:12:01.496 23473.804 - 23592.960: 97.5260% ( 2) 00:12:01.496 23592.960 - 23712.116: 97.5550% ( 2) 00:12:01.496 23712.116 - 23831.273: 97.5984% ( 3) 00:12:01.496 23831.273 - 23950.429: 97.6273% ( 2) 00:12:01.496 23950.429 - 24069.585: 97.6707% ( 3) 00:12:01.496 24069.585 - 24188.742: 97.6997% ( 2) 00:12:01.496 24188.742 - 24307.898: 97.7286% ( 2) 00:12:01.496 24307.898 - 24427.055: 97.7720% ( 3) 00:12:01.496 24427.055 - 24546.211: 97.7865% ( 1) 00:12:01.496 
24546.211 - 24665.367: 97.8299% ( 3) 00:12:01.496 24665.367 - 24784.524: 97.8733% ( 3) 00:12:01.496 24784.524 - 24903.680: 97.9022% ( 2) 00:12:01.496 24903.680 - 25022.836: 97.9456% ( 3) 00:12:01.496 25022.836 - 25141.993: 97.9745% ( 2) 00:12:01.496 25141.993 - 25261.149: 98.0035% ( 2) 00:12:01.496 25261.149 - 25380.305: 98.0469% ( 3) 00:12:01.496 25380.305 - 25499.462: 98.0758% ( 2) 00:12:01.496 25499.462 - 25618.618: 98.1047% ( 2) 00:12:01.496 25618.618 - 25737.775: 98.1337% ( 2) 00:12:01.496 25737.775 - 25856.931: 98.1481% ( 1) 00:12:01.496 32648.844 - 32887.156: 98.2060% ( 4) 00:12:01.496 32887.156 - 33125.469: 98.2928% ( 6) 00:12:01.496 33125.469 - 33363.782: 98.3652% ( 5) 00:12:01.496 33363.782 - 33602.095: 98.4520% ( 6) 00:12:01.496 33602.095 - 33840.407: 98.5243% ( 5) 00:12:01.496 33840.407 - 34078.720: 98.6111% ( 6) 00:12:01.496 34078.720 - 34317.033: 98.6979% ( 6) 00:12:01.496 34317.033 - 34555.345: 98.7703% ( 5) 00:12:01.496 34555.345 - 34793.658: 98.8426% ( 5) 00:12:01.496 34793.658 - 35031.971: 98.9149% ( 5) 00:12:01.496 35031.971 - 35270.284: 98.9873% ( 5) 00:12:01.496 35270.284 - 35508.596: 99.0596% ( 5) 00:12:01.496 35508.596 - 35746.909: 99.1464% ( 6) 00:12:01.496 35746.909 - 35985.222: 99.2043% ( 4) 00:12:01.496 35985.222 - 36223.535: 99.2766% ( 5) 00:12:01.496 36223.535 - 36461.847: 99.3634% ( 6) 00:12:01.496 36461.847 - 36700.160: 99.4358% ( 5) 00:12:01.496 36700.160 - 36938.473: 99.5226% ( 6) 00:12:01.496 36938.473 - 37176.785: 99.6094% ( 6) 00:12:01.496 37176.785 - 37415.098: 99.6962% ( 6) 00:12:01.496 37415.098 - 37653.411: 99.7685% ( 5) 00:12:01.496 37653.411 - 37891.724: 99.8553% ( 6) 00:12:01.496 37891.724 - 38130.036: 99.9277% ( 5) 00:12:01.496 38130.036 - 38368.349: 100.0000% ( 5) 00:12:01.496 00:12:01.496 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:12:01.496 ============================================================================== 00:12:01.496 Range in us Cumulative IO count 00:12:01.496 12094.371 - 12153.949: 0.0145% ( 1) 00:12:01.496 12153.949 - 12213.527: 0.0434% ( 2) 00:12:01.496 12213.527 - 12273.105: 0.1013% ( 4) 00:12:01.496 12273.105 - 12332.684: 0.1591% ( 4) 00:12:01.496 12332.684 - 12392.262: 0.2315% ( 5) 00:12:01.496 12392.262 - 12451.840: 0.2749% ( 3) 00:12:01.496 12451.840 - 12511.418: 0.3472% ( 5) 00:12:01.496 12511.418 - 12570.996: 0.4051% ( 4) 00:12:01.496 12570.996 - 12630.575: 0.4774% ( 5) 00:12:01.496 12630.575 - 12690.153: 0.5353% ( 4) 00:12:01.496 12690.153 - 12749.731: 0.5787% ( 3) 00:12:01.496 12749.731 - 12809.309: 0.6366% ( 4) 00:12:01.496 12809.309 - 12868.887: 0.7523% ( 8) 00:12:01.496 12868.887 - 12928.465: 0.8825% ( 9) 00:12:01.496 12928.465 - 12988.044: 0.9983% ( 8) 00:12:01.496 12988.044 - 13047.622: 1.0561% ( 4) 00:12:01.496 13047.622 - 13107.200: 1.0995% ( 3) 00:12:01.496 13107.200 - 13166.778: 1.1285% ( 2) 00:12:01.496 13166.778 - 13226.356: 1.1719% ( 3) 00:12:01.496 13226.356 - 13285.935: 1.2731% ( 7) 00:12:01.496 13285.935 - 13345.513: 1.3744% ( 7) 00:12:01.496 13345.513 - 13405.091: 1.4612% ( 6) 00:12:01.496 13405.091 - 13464.669: 1.5480% ( 6) 00:12:01.496 13464.669 - 13524.247: 1.6638% ( 8) 00:12:01.496 13524.247 - 13583.825: 1.7650% ( 7) 00:12:01.496 13583.825 - 13643.404: 1.8663% ( 7) 00:12:01.496 13643.404 - 13702.982: 1.9387% ( 5) 00:12:01.496 13702.982 - 13762.560: 2.0399% ( 7) 00:12:01.496 13762.560 - 13822.138: 2.1267% ( 6) 00:12:01.496 13822.138 - 13881.716: 2.2425% ( 8) 00:12:01.496 13881.716 - 13941.295: 2.3582% ( 8) 00:12:01.496 13941.295 - 14000.873: 2.4306% ( 5) 00:12:01.496 14000.873 
- 14060.451: 2.4884% ( 4) 00:12:01.496 14060.451 - 14120.029: 2.5463% ( 4) 00:12:01.496 14120.029 - 14179.607: 2.6186% ( 5) 00:12:01.496 14179.607 - 14239.185: 2.6765% ( 4) 00:12:01.496 14239.185 - 14298.764: 2.7344% ( 4) 00:12:01.496 14298.764 - 14358.342: 2.7922% ( 4) 00:12:01.496 14358.342 - 14417.920: 2.8356% ( 3) 00:12:01.496 14417.920 - 14477.498: 2.9225% ( 6) 00:12:01.496 14477.498 - 14537.076: 2.9948% ( 5) 00:12:01.496 14537.076 - 14596.655: 3.0527% ( 4) 00:12:01.496 14596.655 - 14656.233: 3.1250% ( 5) 00:12:01.496 14656.233 - 14715.811: 3.1684% ( 3) 00:12:01.496 14715.811 - 14775.389: 3.2118% ( 3) 00:12:01.496 14775.389 - 14834.967: 3.2407% ( 2) 00:12:01.496 14834.967 - 14894.545: 3.2841% ( 3) 00:12:01.496 14894.545 - 14954.124: 3.3275% ( 3) 00:12:01.496 14954.124 - 15013.702: 3.3565% ( 2) 00:12:01.496 15013.702 - 15073.280: 3.3999% ( 3) 00:12:01.496 15073.280 - 15132.858: 3.4433% ( 3) 00:12:01.496 15132.858 - 15192.436: 3.4867% ( 3) 00:12:01.496 15192.436 - 15252.015: 3.5301% ( 3) 00:12:01.496 15252.015 - 15371.171: 3.6169% ( 6) 00:12:01.496 15371.171 - 15490.327: 3.6892% ( 5) 00:12:01.496 15490.327 - 15609.484: 3.8194% ( 9) 00:12:01.496 15609.484 - 15728.640: 4.0509% ( 16) 00:12:01.496 15728.640 - 15847.796: 4.3403% ( 20) 00:12:01.496 15847.796 - 15966.953: 4.9190% ( 40) 00:12:01.496 15966.953 - 16086.109: 5.6713% ( 52) 00:12:01.496 16086.109 - 16205.265: 6.7274% ( 73) 00:12:01.496 16205.265 - 16324.422: 8.1597% ( 99) 00:12:01.496 16324.422 - 16443.578: 10.0984% ( 134) 00:12:01.496 16443.578 - 16562.735: 12.3119% ( 153) 00:12:01.496 16562.735 - 16681.891: 14.8293% ( 174) 00:12:01.496 16681.891 - 16801.047: 17.3611% ( 175) 00:12:01.496 16801.047 - 16920.204: 20.0376% ( 185) 00:12:01.496 16920.204 - 17039.360: 22.8154% ( 192) 00:12:01.496 17039.360 - 17158.516: 25.6800% ( 198) 00:12:01.496 17158.516 - 17277.673: 28.5012% ( 195) 00:12:01.496 17277.673 - 17396.829: 31.1777% ( 185) 00:12:01.496 17396.829 - 17515.985: 33.9120% ( 189) 00:12:01.496 17515.985 - 17635.142: 36.5596% ( 183) 00:12:01.496 17635.142 - 17754.298: 39.0480% ( 172) 00:12:01.496 17754.298 - 17873.455: 41.5654% ( 174) 00:12:01.496 17873.455 - 17992.611: 44.1406% ( 178) 00:12:01.496 17992.611 - 18111.767: 46.6001% ( 170) 00:12:01.496 18111.767 - 18230.924: 49.6817% ( 213) 00:12:01.496 18230.924 - 18350.080: 52.6910% ( 208) 00:12:01.496 18350.080 - 18469.236: 55.5122% ( 195) 00:12:01.496 18469.236 - 18588.393: 57.3640% ( 128) 00:12:01.496 18588.393 - 18707.549: 59.2303% ( 129) 00:12:01.496 18707.549 - 18826.705: 60.9520% ( 119) 00:12:01.496 18826.705 - 18945.862: 62.7315% ( 123) 00:12:01.496 18945.862 - 19065.018: 64.4531% ( 119) 00:12:01.496 19065.018 - 19184.175: 66.2616% ( 125) 00:12:01.496 19184.175 - 19303.331: 68.0411% ( 123) 00:12:01.496 19303.331 - 19422.487: 69.8206% ( 123) 00:12:01.496 19422.487 - 19541.644: 71.5278% ( 118) 00:12:01.496 19541.644 - 19660.800: 73.4230% ( 131) 00:12:01.496 19660.800 - 19779.956: 75.1881% ( 122) 00:12:01.496 19779.956 - 19899.113: 77.0110% ( 126) 00:12:01.496 19899.113 - 20018.269: 78.8194% ( 125) 00:12:01.496 20018.269 - 20137.425: 80.5990% ( 123) 00:12:01.496 20137.425 - 20256.582: 82.4797% ( 130) 00:12:01.496 20256.582 - 20375.738: 84.2159% ( 120) 00:12:01.496 20375.738 - 20494.895: 85.8941% ( 116) 00:12:01.496 20494.895 - 20614.051: 87.6591% ( 122) 00:12:01.496 20614.051 - 20733.207: 89.2361% ( 109) 00:12:01.496 20733.207 - 20852.364: 90.9433% ( 118) 00:12:01.497 20852.364 - 20971.520: 92.4913% ( 107) 00:12:01.497 20971.520 - 21090.676: 93.8657% ( 95) 00:12:01.497 21090.676 
- 21209.833: 94.8929% ( 71) 00:12:01.497 21209.833 - 21328.989: 95.6453% ( 52) 00:12:01.497 21328.989 - 21448.145: 96.1661% ( 36) 00:12:01.497 21448.145 - 21567.302: 96.5422% ( 26) 00:12:01.497 21567.302 - 21686.458: 96.7593% ( 15) 00:12:01.497 21686.458 - 21805.615: 96.9039% ( 10) 00:12:01.497 21805.615 - 21924.771: 97.0052% ( 7) 00:12:01.497 21924.771 - 22043.927: 97.1209% ( 8) 00:12:01.497 22043.927 - 22163.084: 97.2078% ( 6) 00:12:01.497 22163.084 - 22282.240: 97.2801% ( 5) 00:12:01.497 22282.240 - 22401.396: 97.3669% ( 6) 00:12:01.497 22401.396 - 22520.553: 97.3958% ( 2) 00:12:01.497 22520.553 - 22639.709: 97.4248% ( 2) 00:12:01.497 22639.709 - 22758.865: 97.4537% ( 2) 00:12:01.497 22758.865 - 22878.022: 97.4826% ( 2) 00:12:01.497 22878.022 - 22997.178: 97.4971% ( 1) 00:12:01.497 22997.178 - 23116.335: 97.5405% ( 3) 00:12:01.497 23116.335 - 23235.491: 97.5694% ( 2) 00:12:01.497 23235.491 - 23354.647: 97.5984% ( 2) 00:12:01.497 23354.647 - 23473.804: 97.6273% ( 2) 00:12:01.497 23473.804 - 23592.960: 97.6562% ( 2) 00:12:01.497 23592.960 - 23712.116: 97.6852% ( 2) 00:12:01.497 23712.116 - 23831.273: 97.7141% ( 2) 00:12:01.497 23831.273 - 23950.429: 97.7575% ( 3) 00:12:01.497 23950.429 - 24069.585: 97.7865% ( 2) 00:12:01.497 24069.585 - 24188.742: 97.8154% ( 2) 00:12:01.497 24188.742 - 24307.898: 97.8443% ( 2) 00:12:01.497 24307.898 - 24427.055: 97.8733% ( 2) 00:12:01.497 24427.055 - 24546.211: 97.9022% ( 2) 00:12:01.497 24546.211 - 24665.367: 97.9311% ( 2) 00:12:01.497 24665.367 - 24784.524: 97.9601% ( 2) 00:12:01.497 24784.524 - 24903.680: 97.9890% ( 2) 00:12:01.497 24903.680 - 25022.836: 98.0179% ( 2) 00:12:01.497 25022.836 - 25141.993: 98.0613% ( 3) 00:12:01.497 25141.993 - 25261.149: 98.0903% ( 2) 00:12:01.497 25261.149 - 25380.305: 98.1337% ( 3) 00:12:01.497 25380.305 - 25499.462: 98.1481% ( 1) 00:12:01.497 33125.469 - 33363.782: 98.3362% ( 13) 00:12:01.497 33363.782 - 33602.095: 98.7269% ( 27) 00:12:01.497 33602.095 - 33840.407: 98.8426% ( 8) 00:12:01.497 33840.407 - 34078.720: 98.9005% ( 4) 00:12:01.497 34078.720 - 34317.033: 98.9583% ( 4) 00:12:01.497 34317.033 - 34555.345: 99.0307% ( 5) 00:12:01.497 34555.345 - 34793.658: 99.0741% ( 3) 00:12:01.497 34793.658 - 35031.971: 99.1464% ( 5) 00:12:01.497 35031.971 - 35270.284: 99.1898% ( 3) 00:12:01.497 35270.284 - 35508.596: 99.2622% ( 5) 00:12:01.497 35508.596 - 35746.909: 99.3490% ( 6) 00:12:01.497 35746.909 - 35985.222: 99.4213% ( 5) 00:12:01.497 35985.222 - 36223.535: 99.4936% ( 5) 00:12:01.497 36223.535 - 36461.847: 99.5515% ( 4) 00:12:01.497 36461.847 - 36700.160: 99.6383% ( 6) 00:12:01.497 36700.160 - 36938.473: 99.7106% ( 5) 00:12:01.497 36938.473 - 37176.785: 99.7685% ( 4) 00:12:01.497 37176.785 - 37415.098: 99.8553% ( 6) 00:12:01.497 37415.098 - 37653.411: 99.9277% ( 5) 00:12:01.497 37653.411 - 37891.724: 100.0000% ( 5) 00:12:01.497 00:12:01.497 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:12:01.497 ============================================================================== 00:12:01.497 Range in us Cumulative IO count 00:12:01.497 12153.949 - 12213.527: 0.0145% ( 1) 00:12:01.497 12213.527 - 12273.105: 0.0289% ( 1) 00:12:01.497 12451.840 - 12511.418: 0.0723% ( 3) 00:12:01.497 12511.418 - 12570.996: 0.1447% ( 5) 00:12:01.497 12570.996 - 12630.575: 0.1881% ( 3) 00:12:01.497 12630.575 - 12690.153: 0.2459% ( 4) 00:12:01.497 12690.153 - 12749.731: 0.3183% ( 5) 00:12:01.497 12749.731 - 12809.309: 0.3762% ( 4) 00:12:01.497 12809.309 - 12868.887: 0.4630% ( 6) 00:12:01.497 12868.887 - 12928.465: 0.5498% ( 6) 
00:12:01.497 12928.465 - 12988.044: 0.6800% ( 9) 00:12:01.497 12988.044 - 13047.622: 0.7812% ( 7) 00:12:01.497 13047.622 - 13107.200: 0.8970% ( 8) 00:12:01.497 13107.200 - 13166.778: 1.0417% ( 10) 00:12:01.497 13166.778 - 13226.356: 1.1429% ( 7) 00:12:01.497 13226.356 - 13285.935: 1.2587% ( 8) 00:12:01.497 13285.935 - 13345.513: 1.3744% ( 8) 00:12:01.497 13345.513 - 13405.091: 1.5770% ( 14) 00:12:01.497 13405.091 - 13464.669: 1.6638% ( 6) 00:12:01.497 13464.669 - 13524.247: 1.7795% ( 8) 00:12:01.497 13524.247 - 13583.825: 1.8663% ( 6) 00:12:01.497 13583.825 - 13643.404: 1.9531% ( 6) 00:12:01.497 13643.404 - 13702.982: 2.0399% ( 6) 00:12:01.497 13702.982 - 13762.560: 2.0978% ( 4) 00:12:01.497 13762.560 - 13822.138: 2.1557% ( 4) 00:12:01.497 13822.138 - 13881.716: 2.2280% ( 5) 00:12:01.497 13881.716 - 13941.295: 2.2859% ( 4) 00:12:01.497 13941.295 - 14000.873: 2.3582% ( 5) 00:12:01.497 14000.873 - 14060.451: 2.4306% ( 5) 00:12:01.497 14060.451 - 14120.029: 2.5029% ( 5) 00:12:01.497 14120.029 - 14179.607: 2.5608% ( 4) 00:12:01.497 14179.607 - 14239.185: 2.6331% ( 5) 00:12:01.497 14239.185 - 14298.764: 2.6910% ( 4) 00:12:01.497 14298.764 - 14358.342: 2.7344% ( 3) 00:12:01.497 14358.342 - 14417.920: 2.8067% ( 5) 00:12:01.497 14417.920 - 14477.498: 2.8791% ( 5) 00:12:01.497 14477.498 - 14537.076: 2.9369% ( 4) 00:12:01.497 14537.076 - 14596.655: 3.0093% ( 5) 00:12:01.497 14596.655 - 14656.233: 3.0816% ( 5) 00:12:01.497 14656.233 - 14715.811: 3.1539% ( 5) 00:12:01.497 14715.811 - 14775.389: 3.2407% ( 6) 00:12:01.497 14775.389 - 14834.967: 3.3131% ( 5) 00:12:01.497 14834.967 - 14894.545: 3.3854% ( 5) 00:12:01.497 14894.545 - 14954.124: 3.4288% ( 3) 00:12:01.497 14954.124 - 15013.702: 3.4722% ( 3) 00:12:01.497 15013.702 - 15073.280: 3.5156% ( 3) 00:12:01.497 15073.280 - 15132.858: 3.5590% ( 3) 00:12:01.497 15132.858 - 15192.436: 3.6024% ( 3) 00:12:01.497 15192.436 - 15252.015: 3.6458% ( 3) 00:12:01.497 15252.015 - 15371.171: 3.7037% ( 4) 00:12:01.497 15371.171 - 15490.327: 3.7326% ( 2) 00:12:01.497 15490.327 - 15609.484: 3.8050% ( 5) 00:12:01.497 15609.484 - 15728.640: 4.0075% ( 14) 00:12:01.497 15728.640 - 15847.796: 4.3692% ( 25) 00:12:01.497 15847.796 - 15966.953: 5.1649% ( 55) 00:12:01.497 15966.953 - 16086.109: 6.2645% ( 76) 00:12:01.497 16086.109 - 16205.265: 7.6823% ( 98) 00:12:01.497 16205.265 - 16324.422: 9.4329% ( 121) 00:12:01.497 16324.422 - 16443.578: 11.2269% ( 124) 00:12:01.497 16443.578 - 16562.735: 13.1944% ( 136) 00:12:01.497 16562.735 - 16681.891: 15.4369% ( 155) 00:12:01.497 16681.891 - 16801.047: 17.5926% ( 149) 00:12:01.497 16801.047 - 16920.204: 19.9363% ( 162) 00:12:01.497 16920.204 - 17039.360: 22.3380% ( 166) 00:12:01.497 17039.360 - 17158.516: 24.8553% ( 174) 00:12:01.497 17158.516 - 17277.673: 27.2280% ( 164) 00:12:01.497 17277.673 - 17396.829: 29.6441% ( 167) 00:12:01.497 17396.829 - 17515.985: 32.1615% ( 174) 00:12:01.497 17515.985 - 17635.142: 34.7946% ( 182) 00:12:01.497 17635.142 - 17754.298: 37.2251% ( 168) 00:12:01.497 17754.298 - 17873.455: 39.5833% ( 163) 00:12:01.497 17873.455 - 17992.611: 42.1152% ( 175) 00:12:01.497 17992.611 - 18111.767: 45.0231% ( 201) 00:12:01.497 18111.767 - 18230.924: 47.4392% ( 167) 00:12:01.497 18230.924 - 18350.080: 50.4919% ( 211) 00:12:01.497 18350.080 - 18469.236: 54.3981% ( 270) 00:12:01.497 18469.236 - 18588.393: 57.4074% ( 208) 00:12:01.497 18588.393 - 18707.549: 59.4329% ( 140) 00:12:01.497 18707.549 - 18826.705: 61.3860% ( 135) 00:12:01.497 18826.705 - 18945.862: 63.1366% ( 121) 00:12:01.497 18945.862 - 19065.018: 65.0752% ( 
134) 00:12:01.497 19065.018 - 19184.175: 66.8258% ( 121) 00:12:01.497 19184.175 - 19303.331: 68.6632% ( 127) 00:12:01.497 19303.331 - 19422.487: 70.3993% ( 120) 00:12:01.497 19422.487 - 19541.644: 72.2512% ( 128) 00:12:01.497 19541.644 - 19660.800: 73.9583% ( 118) 00:12:01.497 19660.800 - 19779.956: 75.7378% ( 123) 00:12:01.497 19779.956 - 19899.113: 77.4306% ( 117) 00:12:01.497 19899.113 - 20018.269: 79.2245% ( 124) 00:12:01.497 20018.269 - 20137.425: 80.9606% ( 120) 00:12:01.497 20137.425 - 20256.582: 82.7112% ( 121) 00:12:01.497 20256.582 - 20375.738: 84.4184% ( 118) 00:12:01.497 20375.738 - 20494.895: 86.1400% ( 119) 00:12:01.497 20494.895 - 20614.051: 87.7315% ( 110) 00:12:01.497 20614.051 - 20733.207: 89.4242% ( 117) 00:12:01.497 20733.207 - 20852.364: 91.1024% ( 116) 00:12:01.497 20852.364 - 20971.520: 92.6071% ( 104) 00:12:01.497 20971.520 - 21090.676: 93.9381% ( 92) 00:12:01.497 21090.676 - 21209.833: 94.9942% ( 73) 00:12:01.497 21209.833 - 21328.989: 95.7755% ( 54) 00:12:01.497 21328.989 - 21448.145: 96.1950% ( 29) 00:12:01.497 21448.145 - 21567.302: 96.4844% ( 20) 00:12:01.497 21567.302 - 21686.458: 96.6435% ( 11) 00:12:01.497 21686.458 - 21805.615: 96.7882% ( 10) 00:12:01.497 21805.615 - 21924.771: 96.8605% ( 5) 00:12:01.497 21924.771 - 22043.927: 96.9184% ( 4) 00:12:01.497 22043.927 - 22163.084: 97.0052% ( 6) 00:12:01.497 22163.084 - 22282.240: 97.0486% ( 3) 00:12:01.497 22282.240 - 22401.396: 97.0920% ( 3) 00:12:01.497 22401.396 - 22520.553: 97.1209% ( 2) 00:12:01.497 22520.553 - 22639.709: 97.1499% ( 2) 00:12:01.497 22639.709 - 22758.865: 97.1788% ( 2) 00:12:01.497 22758.865 - 22878.022: 97.2078% ( 2) 00:12:01.497 22878.022 - 22997.178: 97.2367% ( 2) 00:12:01.497 22997.178 - 23116.335: 97.2656% ( 2) 00:12:01.497 23116.335 - 23235.491: 97.2946% ( 2) 00:12:01.497 23235.491 - 23354.647: 97.3235% ( 2) 00:12:01.497 23354.647 - 23473.804: 97.3524% ( 2) 00:12:01.497 23473.804 - 23592.960: 97.3958% ( 3) 00:12:01.497 23592.960 - 23712.116: 97.4248% ( 2) 00:12:01.497 23712.116 - 23831.273: 97.4537% ( 2) 00:12:01.497 23831.273 - 23950.429: 97.4826% ( 2) 00:12:01.497 23950.429 - 24069.585: 97.5116% ( 2) 00:12:01.497 24069.585 - 24188.742: 97.5550% ( 3) 00:12:01.497 24188.742 - 24307.898: 97.5839% ( 2) 00:12:01.497 24307.898 - 24427.055: 97.6273% ( 3) 00:12:01.497 24427.055 - 24546.211: 97.6562% ( 2) 00:12:01.497 24546.211 - 24665.367: 97.6997% ( 3) 00:12:01.497 24665.367 - 24784.524: 97.7431% ( 3) 00:12:01.497 24784.524 - 24903.680: 97.7720% ( 2) 00:12:01.497 24903.680 - 25022.836: 97.8154% ( 3) 00:12:01.497 25022.836 - 25141.993: 97.8588% ( 3) 00:12:01.497 25141.993 - 25261.149: 97.8877% ( 2) 00:12:01.498 25261.149 - 25380.305: 97.9311% ( 3) 00:12:01.498 25380.305 - 25499.462: 97.9601% ( 2) 00:12:01.498 25499.462 - 25618.618: 98.0035% ( 3) 00:12:01.498 25618.618 - 25737.775: 98.0324% ( 2) 00:12:01.498 25737.775 - 25856.931: 98.0758% ( 3) 00:12:01.498 25856.931 - 25976.087: 98.1192% ( 3) 00:12:01.498 25976.087 - 26095.244: 98.1481% ( 2) 00:12:01.498 31457.280 - 31695.593: 98.2205% ( 5) 00:12:01.498 31695.593 - 31933.905: 98.3507% ( 9) 00:12:01.498 31933.905 - 32172.218: 98.5388% ( 13) 00:12:01.498 32172.218 - 32410.531: 98.8860% ( 24) 00:12:01.498 32410.531 - 32648.844: 98.9873% ( 7) 00:12:01.498 32648.844 - 32887.156: 99.0596% ( 5) 00:12:01.498 32887.156 - 33125.469: 99.1319% ( 5) 00:12:01.498 33125.469 - 33363.782: 99.2188% ( 6) 00:12:01.498 33363.782 - 33602.095: 99.2911% ( 5) 00:12:01.498 33602.095 - 33840.407: 99.3634% ( 5) 00:12:01.498 33840.407 - 34078.720: 99.4502% ( 6) 
00:12:01.498 34078.720 - 34317.033: 99.5370% ( 6) 00:12:01.498 34317.033 - 34555.345: 99.6238% ( 6) 00:12:01.498 34555.345 - 34793.658: 99.7106% ( 6) 00:12:01.498 34793.658 - 35031.971: 99.7830% ( 5) 00:12:01.498 35031.971 - 35270.284: 99.8698% ( 6) 00:12:01.498 35270.284 - 35508.596: 99.9421% ( 5) 00:12:01.498 35508.596 - 35746.909: 100.0000% ( 4) 00:12:01.498 00:12:01.498 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:12:01.498 ============================================================================== 00:12:01.498 Range in us Cumulative IO count 00:12:01.498 12570.996 - 12630.575: 0.0579% ( 4) 00:12:01.498 12630.575 - 12690.153: 0.1302% ( 5) 00:12:01.498 12690.153 - 12749.731: 0.1881% ( 4) 00:12:01.498 12749.731 - 12809.309: 0.2459% ( 4) 00:12:01.498 12809.309 - 12868.887: 0.3038% ( 4) 00:12:01.498 12868.887 - 12928.465: 0.3906% ( 6) 00:12:01.498 12928.465 - 12988.044: 0.4340% ( 3) 00:12:01.498 12988.044 - 13047.622: 0.5208% ( 6) 00:12:01.498 13047.622 - 13107.200: 0.6221% ( 7) 00:12:01.498 13107.200 - 13166.778: 0.7089% ( 6) 00:12:01.498 13166.778 - 13226.356: 0.8102% ( 7) 00:12:01.498 13226.356 - 13285.935: 0.8825% ( 5) 00:12:01.498 13285.935 - 13345.513: 0.9693% ( 6) 00:12:01.498 13345.513 - 13405.091: 1.0272% ( 4) 00:12:01.498 13405.091 - 13464.669: 1.0706% ( 3) 00:12:01.498 13464.669 - 13524.247: 1.1429% ( 5) 00:12:01.498 13524.247 - 13583.825: 1.2153% ( 5) 00:12:01.498 13583.825 - 13643.404: 1.3021% ( 6) 00:12:01.498 13643.404 - 13702.982: 1.3455% ( 3) 00:12:01.498 13702.982 - 13762.560: 1.4323% ( 6) 00:12:01.498 13762.560 - 13822.138: 1.5191% ( 6) 00:12:01.498 13822.138 - 13881.716: 1.5625% ( 3) 00:12:01.498 13881.716 - 13941.295: 1.6493% ( 6) 00:12:01.498 13941.295 - 14000.873: 1.7361% ( 6) 00:12:01.498 14000.873 - 14060.451: 1.7940% ( 4) 00:12:01.498 14060.451 - 14120.029: 1.8808% ( 6) 00:12:01.498 14120.029 - 14179.607: 1.9387% ( 4) 00:12:01.498 14179.607 - 14239.185: 2.0255% ( 6) 00:12:01.498 14239.185 - 14298.764: 2.1123% ( 6) 00:12:01.498 14298.764 - 14358.342: 2.2135% ( 7) 00:12:01.498 14358.342 - 14417.920: 2.3003% ( 6) 00:12:01.498 14417.920 - 14477.498: 2.3582% ( 4) 00:12:01.498 14477.498 - 14537.076: 2.4161% ( 4) 00:12:01.498 14537.076 - 14596.655: 2.5029% ( 6) 00:12:01.498 14596.655 - 14656.233: 2.5608% ( 4) 00:12:01.498 14656.233 - 14715.811: 2.6476% ( 6) 00:12:01.498 14715.811 - 14775.389: 2.7199% ( 5) 00:12:01.498 14775.389 - 14834.967: 2.7633% ( 3) 00:12:01.498 14834.967 - 14894.545: 2.8501% ( 6) 00:12:01.498 14894.545 - 14954.124: 2.9080% ( 4) 00:12:01.498 14954.124 - 15013.702: 2.9948% ( 6) 00:12:01.498 15013.702 - 15073.280: 3.0093% ( 1) 00:12:01.498 15073.280 - 15132.858: 3.0237% ( 1) 00:12:01.498 15132.858 - 15192.436: 3.0527% ( 2) 00:12:01.498 15192.436 - 15252.015: 3.0816% ( 2) 00:12:01.498 15252.015 - 15371.171: 3.1250% ( 3) 00:12:01.498 15371.171 - 15490.327: 3.1829% ( 4) 00:12:01.498 15490.327 - 15609.484: 3.2552% ( 5) 00:12:01.498 15609.484 - 15728.640: 3.3709% ( 8) 00:12:01.498 15728.640 - 15847.796: 3.7326% ( 25) 00:12:01.498 15847.796 - 15966.953: 4.4560% ( 50) 00:12:01.498 15966.953 - 16086.109: 5.3819% ( 64) 00:12:01.498 16086.109 - 16205.265: 6.6117% ( 85) 00:12:01.498 16205.265 - 16324.422: 8.3478% ( 120) 00:12:01.498 16324.422 - 16443.578: 10.3299% ( 137) 00:12:01.498 16443.578 - 16562.735: 12.6736% ( 162) 00:12:01.498 16562.735 - 16681.891: 15.0029% ( 161) 00:12:01.498 16681.891 - 16801.047: 17.6505% ( 183) 00:12:01.498 16801.047 - 16920.204: 20.1968% ( 176) 00:12:01.498 16920.204 - 17039.360: 22.9745% ( 192) 00:12:01.498 
17039.360 - 17158.516: 25.8681% ( 200) 00:12:01.498 17158.516 - 17277.673: 28.5880% ( 188) 00:12:01.498 17277.673 - 17396.829: 31.2211% ( 182) 00:12:01.498 17396.829 - 17515.985: 33.7674% ( 176) 00:12:01.498 17515.985 - 17635.142: 36.2992% ( 175) 00:12:01.498 17635.142 - 17754.298: 38.7442% ( 169) 00:12:01.498 17754.298 - 17873.455: 41.3194% ( 178) 00:12:01.498 17873.455 - 17992.611: 43.6198% ( 159) 00:12:01.498 17992.611 - 18111.767: 46.0938% ( 171) 00:12:01.498 18111.767 - 18230.924: 48.5677% ( 171) 00:12:01.498 18230.924 - 18350.080: 51.3166% ( 190) 00:12:01.498 18350.080 - 18469.236: 54.4994% ( 220) 00:12:01.498 18469.236 - 18588.393: 56.5538% ( 142) 00:12:01.498 18588.393 - 18707.549: 58.6806% ( 147) 00:12:01.498 18707.549 - 18826.705: 60.5903% ( 132) 00:12:01.498 18826.705 - 18945.862: 62.4132% ( 126) 00:12:01.498 18945.862 - 19065.018: 64.2216% ( 125) 00:12:01.498 19065.018 - 19184.175: 66.0156% ( 124) 00:12:01.498 19184.175 - 19303.331: 67.8096% ( 124) 00:12:01.498 19303.331 - 19422.487: 69.5023% ( 117) 00:12:01.498 19422.487 - 19541.644: 71.2674% ( 122) 00:12:01.498 19541.644 - 19660.800: 73.1192% ( 128) 00:12:01.498 19660.800 - 19779.956: 74.8843% ( 122) 00:12:01.498 19779.956 - 19899.113: 76.5914% ( 118) 00:12:01.498 19899.113 - 20018.269: 78.4288% ( 127) 00:12:01.498 20018.269 - 20137.425: 80.2083% ( 123) 00:12:01.498 20137.425 - 20256.582: 81.9878% ( 123) 00:12:01.498 20256.582 - 20375.738: 83.7529% ( 122) 00:12:01.498 20375.738 - 20494.895: 85.5903% ( 127) 00:12:01.498 20494.895 - 20614.051: 87.4277% ( 127) 00:12:01.498 20614.051 - 20733.207: 89.1348% ( 118) 00:12:01.498 20733.207 - 20852.364: 90.7552% ( 112) 00:12:01.498 20852.364 - 20971.520: 92.1875% ( 99) 00:12:01.498 20971.520 - 21090.676: 93.5475% ( 94) 00:12:01.498 21090.676 - 21209.833: 94.5891% ( 72) 00:12:01.498 21209.833 - 21328.989: 95.4572% ( 60) 00:12:01.498 21328.989 - 21448.145: 95.9925% ( 37) 00:12:01.498 21448.145 - 21567.302: 96.3252% ( 23) 00:12:01.498 21567.302 - 21686.458: 96.5856% ( 18) 00:12:01.498 21686.458 - 21805.615: 96.7303% ( 10) 00:12:01.498 21805.615 - 21924.771: 96.8316% ( 7) 00:12:01.498 21924.771 - 22043.927: 96.9763% ( 10) 00:12:01.498 22043.927 - 22163.084: 97.0052% ( 2) 00:12:01.498 22163.084 - 22282.240: 97.0341% ( 2) 00:12:01.498 22282.240 - 22401.396: 97.0631% ( 2) 00:12:01.498 22401.396 - 22520.553: 97.0920% ( 2) 00:12:01.498 22520.553 - 22639.709: 97.1354% ( 3) 00:12:01.498 22639.709 - 22758.865: 97.1644% ( 2) 00:12:01.498 22758.865 - 22878.022: 97.1788% ( 1) 00:12:01.498 22878.022 - 22997.178: 97.2222% ( 3) 00:12:01.498 22997.178 - 23116.335: 97.2512% ( 2) 00:12:01.498 23116.335 - 23235.491: 97.2801% ( 2) 00:12:01.498 23235.491 - 23354.647: 97.3090% ( 2) 00:12:01.498 23354.647 - 23473.804: 97.3380% ( 2) 00:12:01.498 23473.804 - 23592.960: 97.3669% ( 2) 00:12:01.498 23592.960 - 23712.116: 97.3958% ( 2) 00:12:01.498 23712.116 - 23831.273: 97.4392% ( 3) 00:12:01.498 23831.273 - 23950.429: 97.4682% ( 2) 00:12:01.498 23950.429 - 24069.585: 97.5116% ( 3) 00:12:01.498 24069.585 - 24188.742: 97.5550% ( 3) 00:12:01.499 24188.742 - 24307.898: 97.5839% ( 2) 00:12:01.499 24307.898 - 24427.055: 97.6273% ( 3) 00:12:01.499 24427.055 - 24546.211: 97.6562% ( 2) 00:12:01.499 24546.211 - 24665.367: 97.6997% ( 3) 00:12:01.499 24665.367 - 24784.524: 97.7431% ( 3) 00:12:01.499 24784.524 - 24903.680: 97.7720% ( 2) 00:12:01.499 24903.680 - 25022.836: 97.8154% ( 3) 00:12:01.499 25022.836 - 25141.993: 97.8588% ( 3) 00:12:01.499 25141.993 - 25261.149: 97.8877% ( 2) 00:12:01.499 25261.149 - 25380.305: 97.9311% 
( 3) 00:12:01.499 25380.305 - 25499.462: 97.9745% ( 3) 00:12:01.499 25499.462 - 25618.618: 98.0035% ( 2) 00:12:01.499 25618.618 - 25737.775: 98.0469% ( 3) 00:12:01.499 25737.775 - 25856.931: 98.0758% ( 2) 00:12:01.499 25856.931 - 25976.087: 98.1192% ( 3) 00:12:01.499 25976.087 - 26095.244: 98.1481% ( 2) 00:12:01.499 29074.153 - 29193.309: 98.3362% ( 13) 00:12:01.499 29193.309 - 29312.465: 98.4230% ( 6) 00:12:01.499 29312.465 - 29431.622: 98.4809% ( 4) 00:12:01.499 29431.622 - 29550.778: 98.5098% ( 2) 00:12:01.499 29550.778 - 29669.935: 98.5532% ( 3) 00:12:01.499 29669.935 - 29789.091: 98.5966% ( 3) 00:12:01.499 29789.091 - 29908.247: 98.6400% ( 3) 00:12:01.499 29908.247 - 30027.404: 98.6690% ( 2) 00:12:01.499 30027.404 - 30146.560: 98.7124% ( 3) 00:12:01.499 30146.560 - 30265.716: 98.7558% ( 3) 00:12:01.499 30265.716 - 30384.873: 98.7847% ( 2) 00:12:01.499 30384.873 - 30504.029: 98.8281% ( 3) 00:12:01.499 30504.029 - 30742.342: 98.9149% ( 6) 00:12:01.499 30742.342 - 30980.655: 98.9873% ( 5) 00:12:01.499 30980.655 - 31218.967: 99.0741% ( 6) 00:12:01.499 31218.967 - 31457.280: 99.1464% ( 5) 00:12:01.499 31457.280 - 31695.593: 99.2188% ( 5) 00:12:01.499 31695.593 - 31933.905: 99.3056% ( 6) 00:12:01.499 31933.905 - 32172.218: 99.3924% ( 6) 00:12:01.499 32172.218 - 32410.531: 99.4792% ( 6) 00:12:01.499 32410.531 - 32648.844: 99.5660% ( 6) 00:12:01.499 32648.844 - 32887.156: 99.6528% ( 6) 00:12:01.499 32887.156 - 33125.469: 99.7251% ( 5) 00:12:01.499 33125.469 - 33363.782: 99.8119% ( 6) 00:12:01.499 33363.782 - 33602.095: 99.8987% ( 6) 00:12:01.499 33602.095 - 33840.407: 99.9855% ( 6) 00:12:01.499 33840.407 - 34078.720: 100.0000% ( 1) 00:12:01.499 00:12:01.499 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:12:01.499 ============================================================================== 00:12:01.499 Range in us Cumulative IO count 00:12:01.499 12988.044 - 13047.622: 0.0579% ( 4) 00:12:01.499 13047.622 - 13107.200: 0.1157% ( 4) 00:12:01.499 13107.200 - 13166.778: 0.1591% ( 3) 00:12:01.499 13166.778 - 13226.356: 0.2170% ( 4) 00:12:01.499 13226.356 - 13285.935: 0.2604% ( 3) 00:12:01.499 13285.935 - 13345.513: 0.3328% ( 5) 00:12:01.499 13345.513 - 13405.091: 0.3906% ( 4) 00:12:01.499 13405.091 - 13464.669: 0.4340% ( 3) 00:12:01.499 13464.669 - 13524.247: 0.5353% ( 7) 00:12:01.499 13524.247 - 13583.825: 0.6221% ( 6) 00:12:01.499 13583.825 - 13643.404: 0.7089% ( 6) 00:12:01.499 13643.404 - 13702.982: 0.8102% ( 7) 00:12:01.499 13702.982 - 13762.560: 0.8970% ( 6) 00:12:01.499 13762.560 - 13822.138: 0.9983% ( 7) 00:12:01.499 13822.138 - 13881.716: 1.0851% ( 6) 00:12:01.499 13881.716 - 13941.295: 1.3600% ( 19) 00:12:01.499 13941.295 - 14000.873: 1.4323% ( 5) 00:12:01.499 14000.873 - 14060.451: 1.5046% ( 5) 00:12:01.499 14060.451 - 14120.029: 1.5770% ( 5) 00:12:01.499 14120.029 - 14179.607: 1.6204% ( 3) 00:12:01.499 14179.607 - 14239.185: 1.7072% ( 6) 00:12:01.499 14239.185 - 14298.764: 1.7795% ( 5) 00:12:01.499 14298.764 - 14358.342: 1.8519% ( 5) 00:12:01.499 14358.342 - 14417.920: 1.9242% ( 5) 00:12:01.499 14417.920 - 14477.498: 1.9965% ( 5) 00:12:01.499 14477.498 - 14537.076: 2.0978% ( 7) 00:12:01.499 14537.076 - 14596.655: 2.1557% ( 4) 00:12:01.499 14596.655 - 14656.233: 2.2280% ( 5) 00:12:01.499 14656.233 - 14715.811: 2.3148% ( 6) 00:12:01.499 14715.811 - 14775.389: 2.3727% ( 4) 00:12:01.499 14775.389 - 14834.967: 2.4306% ( 4) 00:12:01.499 14834.967 - 14894.545: 2.5029% ( 5) 00:12:01.499 14894.545 - 14954.124: 2.5608% ( 4) 00:12:01.499 14954.124 - 15013.702: 2.6186% ( 4) 
00:12:01.499 15013.702 - 15073.280: 2.6765% ( 4) 00:12:01.499 15073.280 - 15132.858: 2.7199% ( 3) 00:12:01.499 15132.858 - 15192.436: 2.7488% ( 2) 00:12:01.499 15192.436 - 15252.015: 2.7633% ( 1) 00:12:01.499 15252.015 - 15371.171: 2.8067% ( 3) 00:12:01.499 15371.171 - 15490.327: 2.8935% ( 6) 00:12:01.499 15490.327 - 15609.484: 3.0961% ( 14) 00:12:01.499 15609.484 - 15728.640: 3.4867% ( 27) 00:12:01.499 15728.640 - 15847.796: 4.0365% ( 38) 00:12:01.499 15847.796 - 15966.953: 4.7743% ( 51) 00:12:01.499 15966.953 - 16086.109: 5.8449% ( 74) 00:12:01.499 16086.109 - 16205.265: 7.0168% ( 81) 00:12:01.499 16205.265 - 16324.422: 8.7240% ( 118) 00:12:01.499 16324.422 - 16443.578: 10.6337% ( 132) 00:12:01.499 16443.578 - 16562.735: 12.6736% ( 141) 00:12:01.499 16562.735 - 16681.891: 14.9161% ( 155) 00:12:01.499 16681.891 - 16801.047: 17.2598% ( 162) 00:12:01.499 16801.047 - 16920.204: 19.7049% ( 169) 00:12:01.499 16920.204 - 17039.360: 22.1644% ( 170) 00:12:01.499 17039.360 - 17158.516: 24.7251% ( 177) 00:12:01.499 17158.516 - 17277.673: 27.2859% ( 177) 00:12:01.499 17277.673 - 17396.829: 29.8032% ( 174) 00:12:01.499 17396.829 - 17515.985: 32.5087% ( 187) 00:12:01.499 17515.985 - 17635.142: 35.1997% ( 186) 00:12:01.499 17635.142 - 17754.298: 37.7315% ( 175) 00:12:01.499 17754.298 - 17873.455: 40.3501% ( 181) 00:12:01.499 17873.455 - 17992.611: 42.8964% ( 176) 00:12:01.499 17992.611 - 18111.767: 45.6742% ( 192) 00:12:01.499 18111.767 - 18230.924: 48.3073% ( 182) 00:12:01.499 18230.924 - 18350.080: 50.9115% ( 180) 00:12:01.499 18350.080 - 18469.236: 53.4144% ( 173) 00:12:01.499 18469.236 - 18588.393: 56.0909% ( 185) 00:12:01.499 18588.393 - 18707.549: 58.1597% ( 143) 00:12:01.499 18707.549 - 18826.705: 60.3443% ( 151) 00:12:01.499 18826.705 - 18945.862: 62.2541% ( 132) 00:12:01.499 18945.862 - 19065.018: 64.0770% ( 126) 00:12:01.499 19065.018 - 19184.175: 66.0301% ( 135) 00:12:01.499 19184.175 - 19303.331: 67.8819% ( 128) 00:12:01.499 19303.331 - 19422.487: 69.6181% ( 120) 00:12:01.499 19422.487 - 19541.644: 71.3976% ( 123) 00:12:01.499 19541.644 - 19660.800: 73.2205% ( 126) 00:12:01.499 19660.800 - 19779.956: 75.0000% ( 123) 00:12:01.499 19779.956 - 19899.113: 76.7506% ( 121) 00:12:01.499 19899.113 - 20018.269: 78.4433% ( 117) 00:12:01.499 20018.269 - 20137.425: 80.2517% ( 125) 00:12:01.499 20137.425 - 20256.582: 82.1181% ( 129) 00:12:01.499 20256.582 - 20375.738: 83.8686% ( 121) 00:12:01.499 20375.738 - 20494.895: 85.7060% ( 127) 00:12:01.499 20494.895 - 20614.051: 87.4711% ( 122) 00:12:01.499 20614.051 - 20733.207: 89.1927% ( 119) 00:12:01.499 20733.207 - 20852.364: 90.9433% ( 121) 00:12:01.499 20852.364 - 20971.520: 92.4913% ( 107) 00:12:01.499 20971.520 - 21090.676: 94.0104% ( 105) 00:12:01.499 21090.676 - 21209.833: 95.0666% ( 73) 00:12:01.499 21209.833 - 21328.989: 95.7610% ( 48) 00:12:01.499 21328.989 - 21448.145: 96.2095% ( 31) 00:12:01.499 21448.145 - 21567.302: 96.4554% ( 17) 00:12:01.499 21567.302 - 21686.458: 96.6725% ( 15) 00:12:01.499 21686.458 - 21805.615: 96.7882% ( 8) 00:12:01.759 21805.615 - 21924.771: 96.8895% ( 7) 00:12:01.759 21924.771 - 22043.927: 96.9329% ( 3) 00:12:01.759 22043.927 - 22163.084: 96.9618% ( 2) 00:12:01.759 22163.084 - 22282.240: 96.9907% ( 2) 00:12:01.759 22282.240 - 22401.396: 97.0197% ( 2) 00:12:01.759 22401.396 - 22520.553: 97.0486% ( 2) 00:12:01.759 22520.553 - 22639.709: 97.0775% ( 2) 00:12:01.759 22639.709 - 22758.865: 97.1065% ( 2) 00:12:01.759 22758.865 - 22878.022: 97.1209% ( 1) 00:12:01.759 22878.022 - 22997.178: 97.1644% ( 3) 00:12:01.759 22997.178 - 
23116.335: 97.1933% ( 2) 00:12:01.759 23116.335 - 23235.491: 97.2367% ( 3) 00:12:01.759 23235.491 - 23354.647: 97.2656% ( 2) 00:12:01.759 23354.647 - 23473.804: 97.3090% ( 3) 00:12:01.759 23473.804 - 23592.960: 97.3524% ( 3) 00:12:01.759 23592.960 - 23712.116: 97.3814% ( 2) 00:12:01.759 23712.116 - 23831.273: 97.4248% ( 3) 00:12:01.759 23831.273 - 23950.429: 97.4682% ( 3) 00:12:01.759 23950.429 - 24069.585: 97.4971% ( 2) 00:12:01.759 24069.585 - 24188.742: 97.5405% ( 3) 00:12:01.759 24188.742 - 24307.898: 97.5839% ( 3) 00:12:01.759 24307.898 - 24427.055: 97.6128% ( 2) 00:12:01.759 24427.055 - 24546.211: 97.6562% ( 3) 00:12:01.759 24546.211 - 24665.367: 97.6997% ( 3) 00:12:01.759 24665.367 - 24784.524: 97.7286% ( 2) 00:12:01.759 24784.524 - 24903.680: 97.7720% ( 3) 00:12:01.759 24903.680 - 25022.836: 97.8154% ( 3) 00:12:01.759 25022.836 - 25141.993: 97.8443% ( 2) 00:12:01.759 25141.993 - 25261.149: 97.8877% ( 3) 00:12:01.759 25261.149 - 25380.305: 97.9167% ( 2) 00:12:01.759 25380.305 - 25499.462: 97.9601% ( 3) 00:12:01.759 25499.462 - 25618.618: 98.0035% ( 3) 00:12:01.759 25618.618 - 25737.775: 98.0324% ( 2) 00:12:01.759 25737.775 - 25856.931: 98.0758% ( 3) 00:12:01.759 25856.931 - 25976.087: 98.1047% ( 2) 00:12:01.759 25976.087 - 26095.244: 98.1481% ( 3) 00:12:01.759 27644.276 - 27763.433: 98.2205% ( 5) 00:12:01.759 27763.433 - 27882.589: 98.3507% ( 9) 00:12:01.759 27882.589 - 28001.745: 98.4664% ( 8) 00:12:01.759 28001.745 - 28120.902: 98.5822% ( 8) 00:12:01.759 28120.902 - 28240.058: 98.6979% ( 8) 00:12:01.759 28240.058 - 28359.215: 98.7703% ( 5) 00:12:01.759 28359.215 - 28478.371: 98.8137% ( 3) 00:12:01.759 28478.371 - 28597.527: 98.8571% ( 3) 00:12:01.759 28597.527 - 28716.684: 98.9005% ( 3) 00:12:01.759 28716.684 - 28835.840: 98.9583% ( 4) 00:12:01.759 28835.840 - 28954.996: 98.9873% ( 2) 00:12:01.759 28954.996 - 29074.153: 99.0307% ( 3) 00:12:01.759 29074.153 - 29193.309: 99.0741% ( 3) 00:12:01.759 29193.309 - 29312.465: 99.1175% ( 3) 00:12:01.759 29312.465 - 29431.622: 99.1464% ( 2) 00:12:01.759 29431.622 - 29550.778: 99.1898% ( 3) 00:12:01.759 29550.778 - 29669.935: 99.2332% ( 3) 00:12:01.759 29669.935 - 29789.091: 99.2766% ( 3) 00:12:01.759 29789.091 - 29908.247: 99.3200% ( 3) 00:12:01.759 29908.247 - 30027.404: 99.3490% ( 2) 00:12:01.759 30027.404 - 30146.560: 99.3924% ( 3) 00:12:01.759 30146.560 - 30265.716: 99.4358% ( 3) 00:12:01.759 30265.716 - 30384.873: 99.4647% ( 2) 00:12:01.759 30384.873 - 30504.029: 99.5081% ( 3) 00:12:01.759 30504.029 - 30742.342: 99.5949% ( 6) 00:12:01.759 30742.342 - 30980.655: 99.6672% ( 5) 00:12:01.759 30980.655 - 31218.967: 99.7396% ( 5) 00:12:01.759 31218.967 - 31457.280: 99.8119% ( 5) 00:12:01.759 31457.280 - 31695.593: 99.8987% ( 6) 00:12:01.759 31695.593 - 31933.905: 99.9855% ( 6) 00:12:01.759 31933.905 - 32172.218: 100.0000% ( 1) 00:12:01.759 00:12:01.759 ************************************ 00:12:01.759 END TEST nvme_perf 00:12:01.759 ************************************ 00:12:01.759 12:33:10 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:01.759 00:12:01.759 real 0m2.995s 00:12:01.759 user 0m2.565s 00:12:01.759 sys 0m0.313s 00:12:01.759 12:33:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.759 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:12:01.759 12:33:10 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:01.759 12:33:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:01.759 12:33:10 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:12:01.759 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:12:01.759 ************************************ 00:12:01.759 START TEST nvme_hello_world 00:12:01.759 ************************************ 00:12:01.759 12:33:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:02.016 Initializing NVMe Controllers 00:12:02.016 Attached to 0000:00:06.0 00:12:02.016 Namespace ID: 1 size: 6GB 00:12:02.016 Attached to 0000:00:07.0 00:12:02.016 Namespace ID: 1 size: 5GB 00:12:02.016 Attached to 0000:00:09.0 00:12:02.016 Namespace ID: 1 size: 1GB 00:12:02.016 Attached to 0000:00:08.0 00:12:02.016 Namespace ID: 1 size: 4GB 00:12:02.016 Namespace ID: 2 size: 4GB 00:12:02.016 Namespace ID: 3 size: 4GB 00:12:02.016 Initialization complete. 00:12:02.016 INFO: using host memory buffer for IO 00:12:02.016 Hello world! 00:12:02.016 INFO: using host memory buffer for IO 00:12:02.016 Hello world! 00:12:02.016 INFO: using host memory buffer for IO 00:12:02.016 Hello world! 00:12:02.016 INFO: using host memory buffer for IO 00:12:02.016 Hello world! 00:12:02.016 INFO: using host memory buffer for IO 00:12:02.016 Hello world! 00:12:02.016 INFO: using host memory buffer for IO 00:12:02.016 Hello world! 00:12:02.016 ************************************ 00:12:02.016 END TEST nvme_hello_world 00:12:02.016 ************************************ 00:12:02.016 00:12:02.016 real 0m0.394s 00:12:02.016 user 0m0.200s 00:12:02.016 sys 0m0.149s 00:12:02.016 12:33:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.016 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:12:02.016 12:33:10 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:02.016 12:33:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:02.016 12:33:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:02.016 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:12:02.016 ************************************ 00:12:02.016 START TEST nvme_sgl 00:12:02.016 ************************************ 00:12:02.016 12:33:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:02.273 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:12:02.273 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:12:02.531 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:12:02.531 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:12:02.531 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:12:02.531 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:12:02.531 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:12:02.531 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:12:02.531 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:12:02.531 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:12:02.531 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:12:02.531 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_5 Invalid IO length parameter 
00:12:02.531 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:12:02.531 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:12:02.531 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:12:02.789 NVMe Readv/Writev Request test 00:12:02.789 Attached to 0000:00:06.0 00:12:02.789 Attached to 0000:00:07.0 00:12:02.789 Attached to 0000:00:09.0 00:12:02.789 Attached to 0000:00:08.0 00:12:02.789 0000:00:06.0: build_io_request_2 test passed 00:12:02.789 0000:00:06.0: build_io_request_4 test passed 00:12:02.789 0000:00:06.0: build_io_request_5 test passed 00:12:02.789 0000:00:06.0: build_io_request_6 test passed 00:12:02.789 0000:00:06.0: build_io_request_7 test passed 00:12:02.789 0000:00:06.0: build_io_request_10 test passed 00:12:02.789 0000:00:07.0: build_io_request_2 test passed 00:12:02.789 0000:00:07.0: build_io_request_4 test passed 00:12:02.789 0000:00:07.0: build_io_request_5 test passed 00:12:02.789 0000:00:07.0: build_io_request_6 test passed 00:12:02.789 0000:00:07.0: build_io_request_7 test passed 00:12:02.789 0000:00:07.0: build_io_request_10 test passed 00:12:02.789 Cleaning up... 00:12:02.789 ************************************ 00:12:02.789 END TEST nvme_sgl 00:12:02.789 ************************************ 00:12:02.789 00:12:02.789 real 0m0.542s 00:12:02.789 user 0m0.349s 00:12:02.789 sys 0m0.146s 00:12:02.789 12:33:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.789 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:12:02.789 12:33:11 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:02.789 12:33:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:02.790 12:33:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:02.790 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:12:02.790 ************************************ 00:12:02.790 START TEST nvme_e2edp 00:12:02.790 ************************************ 00:12:02.790 12:33:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:03.049 NVMe Write/Read with End-to-End data protection test 00:12:03.049 Attached to 0000:00:06.0 00:12:03.049 Attached to 0000:00:07.0 00:12:03.049 Attached to 0000:00:09.0 00:12:03.049 Attached to 0000:00:08.0 00:12:03.049 Cleaning up... 
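[Annotation] The build_io_request_* cases above exercise reads and writes whose payload is scattered across several buffers; the "Invalid IO length parameter" rejections are the cases whose total scatter-gather length does not match the LBA count. Below is a minimal sketch of the caller side of that interface, not the test's own code: it assumes an already-attached namespace `ns`, an allocated I/O qpair `qpair`, and two hypothetical DMA-safe fragments held in `sgl_ctx`.

```c
/* Sketch: a read whose payload is split across two buffers, in the style
 * the sgl test exercises. `ns` and `qpair` are assumed to exist already. */
#include <spdk/stdinc.h>
#include <spdk/nvme.h>

struct sgl_ctx {
	struct iovec iov[2]; /* two hypothetical DMA-safe fragments */
	int iovpos;
	uint32_t iov_off;
};

/* Called by the driver to rewind the SGL to a byte offset. */
static void
reset_sgl(void *arg, uint32_t offset)
{
	struct sgl_ctx *ctx = arg;

	ctx->iovpos = 0;
	while (offset >= ctx->iov[ctx->iovpos].iov_len) {
		offset -= ctx->iov[ctx->iovpos++].iov_len;
	}
	ctx->iov_off = offset;
}

/* Called by the driver to fetch the next scatter-gather element. */
static int
next_sge(void *arg, void **address, uint32_t *length)
{
	struct sgl_ctx *ctx = arg;
	struct iovec *iov = &ctx->iov[ctx->iovpos++];

	*address = (uint8_t *)iov->iov_base + ctx->iov_off;
	*length = iov->iov_len - ctx->iov_off;
	ctx->iov_off = 0;
	return 0;
}

static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* check spdk_nvme_cpl_is_error(cpl) in real code */
}

/* The summed iov lengths must equal lba_count * sector size, otherwise the
 * request is rejected up front -- the "Invalid IO length parameter" lines. */
static int
read_split(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	   struct sgl_ctx *ctx)
{
	return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, 8 /* lba_count */,
				      read_done, ctx, 0 /* io_flags */,
				      reset_sgl, next_sge);
}
```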
00:12:03.049 ************************************ 00:12:03.049 END TEST nvme_e2edp 00:12:03.049 ************************************ 00:12:03.049 00:12:03.049 real 0m0.327s 00:12:03.049 user 0m0.101s 00:12:03.049 sys 0m0.161s 00:12:03.049 12:33:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.049 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:12:03.049 12:33:11 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:03.049 12:33:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:03.049 12:33:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.049 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:12:03.049 ************************************ 00:12:03.049 START TEST nvme_reserve 00:12:03.049 ************************************ 00:12:03.049 12:33:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:03.307 ===================================================== 00:12:03.307 NVMe Controller at PCI bus 0, device 6, function 0 00:12:03.307 ===================================================== 00:12:03.307 Reservations: Not Supported 00:12:03.307 ===================================================== 00:12:03.307 NVMe Controller at PCI bus 0, device 7, function 0 00:12:03.307 ===================================================== 00:12:03.307 Reservations: Not Supported 00:12:03.307 ===================================================== 00:12:03.307 NVMe Controller at PCI bus 0, device 9, function 0 00:12:03.307 ===================================================== 00:12:03.307 Reservations: Not Supported 00:12:03.307 ===================================================== 00:12:03.307 NVMe Controller at PCI bus 0, device 8, function 0 00:12:03.307 ===================================================== 00:12:03.307 Reservations: Not Supported 00:12:03.307 Reservation test passed 00:12:03.307 ************************************ 00:12:03.307 END TEST nvme_reserve 00:12:03.307 ************************************ 00:12:03.307 00:12:03.307 real 0m0.278s 00:12:03.307 user 0m0.093s 00:12:03.307 sys 0m0.133s 00:12:03.307 12:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.307 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:12:03.307 12:33:12 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:03.307 12:33:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:03.307 12:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.307 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:12:03.307 ************************************ 00:12:03.307 START TEST nvme_err_injection 00:12:03.307 ************************************ 00:12:03.307 12:33:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:03.874 NVMe Error Injection test 00:12:03.874 Attached to 0000:00:06.0 00:12:03.874 Attached to 0000:00:07.0 00:12:03.874 Attached to 0000:00:09.0 00:12:03.874 Attached to 0000:00:08.0 00:12:03.874 0000:00:07.0: get features failed as expected 00:12:03.874 0000:00:09.0: get features failed as expected 00:12:03.874 0000:00:08.0: get features failed as expected 00:12:03.874 0000:00:06.0: get features failed as expected 00:12:03.874 0000:00:07.0: get features successfully as expected 00:12:03.874 0000:00:09.0: get features successfully as expected 00:12:03.874 0000:00:08.0: get features 
successfully as expected 00:12:03.874 0000:00:06.0: get features successfully as expected 00:12:03.874 0000:00:06.0: read failed as expected 00:12:03.874 0000:00:07.0: read failed as expected 00:12:03.874 0000:00:09.0: read failed as expected 00:12:03.874 0000:00:08.0: read failed as expected 00:12:03.874 0000:00:06.0: read successfully as expected 00:12:03.874 0000:00:07.0: read successfully as expected 00:12:03.874 0000:00:09.0: read successfully as expected 00:12:03.874 0000:00:08.0: read successfully as expected 00:12:03.874 Cleaning up... 00:12:03.874 ************************************ 00:12:03.874 END TEST nvme_err_injection 00:12:03.874 ************************************ 00:12:03.874 00:12:03.874 real 0m0.415s 00:12:03.874 user 0m0.217s 00:12:03.874 sys 0m0.148s 00:12:03.874 12:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.874 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:12:03.874 12:33:12 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:03.874 12:33:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:12:03.874 12:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.874 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:12:03.874 ************************************ 00:12:03.874 START TEST nvme_overhead 00:12:03.874 ************************************ 00:12:03.874 12:33:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:05.248 Initializing NVMe Controllers 00:12:05.248 Attached to 0000:00:06.0 00:12:05.248 Attached to 0000:00:07.0 00:12:05.248 Attached to 0000:00:09.0 00:12:05.248 Attached to 0000:00:08.0 00:12:05.248 Initialization complete. Launching workers. 
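[Annotation] The "get features failed as expected" lines in the err_injection run above come from planting an artificial completion status ahead of the command. A rough sketch of that mechanism follows; it assumes an SPDK build with command error injection available, and the status-code pair used here is an arbitrary illustrative choice.

```c
/* Sketch: force the next admin GET FEATURES to complete with an error,
 * then clear the injection again. Status values are arbitrary. */
#include <spdk/nvme.h>

static int
inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	/* A NULL qpair targets the controller's admin queue. */
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
			SPDK_NVME_OPC_GET_FEATURES,
			false /* still submit to the device */,
			0     /* no injected timeout */,
			1     /* fail exactly one command */,
			SPDK_NVME_SCT_GENERIC,
			SPDK_NVME_SC_INVALID_FIELD);
}

static void
clear_injection(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
			SPDK_NVME_OPC_GET_FEATURES);
}
```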
00:12:05.248 submit (in ns) avg, min, max = 15422.3, 12871.4, 119445.5 00:12:05.248 complete (in ns) avg, min, max = 9978.4, 8656.8, 84154.1 00:12:05.248 00:12:05.248 Submit histogram 00:12:05.248 ================ 00:12:05.248 Range in us Cumulative Count 00:12:05.248 12.858 - 12.916: 0.0102% ( 1) 00:12:05.248 12.916 - 12.975: 0.0203% ( 1) 00:12:05.248 12.975 - 13.033: 0.1625% ( 14) 00:12:05.248 13.033 - 13.091: 0.5991% ( 43) 00:12:05.248 13.091 - 13.149: 1.2287% ( 62) 00:12:05.248 13.149 - 13.207: 2.0309% ( 79) 00:12:05.248 13.207 - 13.265: 2.8940% ( 85) 00:12:05.248 13.265 - 13.324: 3.7470% ( 84) 00:12:05.248 13.324 - 13.382: 4.3562% ( 60) 00:12:05.248 13.382 - 13.440: 4.9350% ( 57) 00:12:05.248 13.440 - 13.498: 5.2600% ( 32) 00:12:05.248 13.498 - 13.556: 5.4427% ( 18) 00:12:05.248 13.556 - 13.615: 5.6357% ( 19) 00:12:05.248 13.615 - 13.673: 5.7271% ( 9) 00:12:05.248 13.673 - 13.731: 5.8692% ( 14) 00:12:05.248 13.731 - 13.789: 6.0621% ( 19) 00:12:05.248 13.789 - 13.847: 6.4988% ( 43) 00:12:05.248 13.847 - 13.905: 7.5345% ( 102) 00:12:05.248 13.905 - 13.964: 8.9866% ( 143) 00:12:05.248 13.964 - 14.022: 11.6470% ( 262) 00:12:05.248 14.022 - 14.080: 15.5463% ( 384) 00:12:05.248 14.080 - 14.138: 20.8266% ( 520) 00:12:05.248 14.138 - 14.196: 27.3558% ( 643) 00:12:05.248 14.196 - 14.255: 34.9716% ( 750) 00:12:05.248 14.255 - 14.313: 42.0491% ( 697) 00:12:05.248 14.313 - 14.371: 48.4870% ( 634) 00:12:05.248 14.371 - 14.429: 54.4070% ( 583) 00:12:05.248 14.429 - 14.487: 58.6007% ( 413) 00:12:05.248 14.487 - 14.545: 61.8400% ( 319) 00:12:05.248 14.545 - 14.604: 64.2262% ( 235) 00:12:05.248 14.604 - 14.662: 66.1353% ( 188) 00:12:05.248 14.662 - 14.720: 67.6178% ( 146) 00:12:05.248 14.720 - 14.778: 69.0800% ( 144) 00:12:05.248 14.778 - 14.836: 70.3595% ( 126) 00:12:05.248 14.836 - 14.895: 71.6186% ( 124) 00:12:05.248 14.895 - 15.011: 73.5276% ( 188) 00:12:05.248 15.011 - 15.127: 74.7157% ( 117) 00:12:05.248 15.127 - 15.244: 75.6397% ( 91) 00:12:05.248 15.244 - 15.360: 76.1982% ( 55) 00:12:05.248 15.360 - 15.476: 76.6145% ( 41) 00:12:05.248 15.476 - 15.593: 77.0816% ( 46) 00:12:05.248 15.593 - 15.709: 77.4980% ( 41) 00:12:05.248 15.709 - 15.825: 77.7417% ( 24) 00:12:05.248 15.825 - 15.942: 78.0565% ( 31) 00:12:05.248 15.942 - 16.058: 78.1377% ( 8) 00:12:05.248 16.058 - 16.175: 78.2291% ( 9) 00:12:05.248 16.175 - 16.291: 78.2697% ( 4) 00:12:05.248 16.291 - 16.407: 78.3205% ( 5) 00:12:05.248 16.407 - 16.524: 78.3814% ( 6) 00:12:05.248 16.524 - 16.640: 78.4423% ( 6) 00:12:05.248 16.640 - 16.756: 78.4728% ( 3) 00:12:05.248 16.756 - 16.873: 78.5236% ( 5) 00:12:05.248 16.873 - 16.989: 79.2242% ( 69) 00:12:05.248 16.989 - 17.105: 81.2855% ( 203) 00:12:05.248 17.105 - 17.222: 84.4334% ( 310) 00:12:05.248 17.222 - 17.338: 86.9314% ( 246) 00:12:05.248 17.338 - 17.455: 88.5256% ( 157) 00:12:05.248 17.455 - 17.571: 89.3582% ( 82) 00:12:05.248 17.571 - 17.687: 90.1807% ( 81) 00:12:05.248 17.687 - 17.804: 90.7392% ( 55) 00:12:05.248 17.804 - 17.920: 91.1759% ( 43) 00:12:05.248 17.920 - 18.036: 91.5820% ( 40) 00:12:05.248 18.036 - 18.153: 91.9070% ( 32) 00:12:05.248 18.153 - 18.269: 92.1304% ( 22) 00:12:05.248 18.269 - 18.385: 92.3233% ( 19) 00:12:05.248 18.385 - 18.502: 92.5061% ( 18) 00:12:05.248 18.502 - 18.618: 92.6076% ( 10) 00:12:05.248 18.618 - 18.735: 92.7396% ( 13) 00:12:05.248 18.735 - 18.851: 92.8920% ( 15) 00:12:05.248 18.851 - 18.967: 93.0747% ( 18) 00:12:05.248 18.967 - 19.084: 93.3286% ( 25) 00:12:05.248 19.084 - 19.200: 93.4809% ( 15) 00:12:05.248 19.200 - 19.316: 93.6535% ( 17) 00:12:05.248 19.316 
- 19.433: 93.7449% ( 9) 00:12:05.248 19.433 - 19.549: 93.9277% ( 18) 00:12:05.248 19.549 - 19.665: 94.0597% ( 13) 00:12:05.248 19.665 - 19.782: 94.1613% ( 10) 00:12:05.248 19.782 - 19.898: 94.2526% ( 9) 00:12:05.248 19.898 - 20.015: 94.4354% ( 18) 00:12:05.248 20.015 - 20.131: 94.5167% ( 8) 00:12:05.248 20.131 - 20.247: 94.6690% ( 15) 00:12:05.248 20.247 - 20.364: 94.7604% ( 9) 00:12:05.248 20.364 - 20.480: 94.8822% ( 12) 00:12:05.248 20.480 - 20.596: 94.9634% ( 8) 00:12:05.248 20.596 - 20.713: 95.0447% ( 8) 00:12:05.248 20.713 - 20.829: 95.2071% ( 16) 00:12:05.248 20.829 - 20.945: 95.3392% ( 13) 00:12:05.248 20.945 - 21.062: 95.4305% ( 9) 00:12:05.248 21.062 - 21.178: 95.5524% ( 12) 00:12:05.248 21.178 - 21.295: 95.6539% ( 10) 00:12:05.248 21.295 - 21.411: 95.7453% ( 9) 00:12:05.248 21.411 - 21.527: 95.8469% ( 10) 00:12:05.248 21.527 - 21.644: 95.9281% ( 8) 00:12:05.248 21.644 - 21.760: 96.0093% ( 8) 00:12:05.248 21.760 - 21.876: 96.1109% ( 10) 00:12:05.248 21.876 - 21.993: 96.1921% ( 8) 00:12:05.248 21.993 - 22.109: 96.3140% ( 12) 00:12:05.248 22.109 - 22.225: 96.4358% ( 12) 00:12:05.248 22.225 - 22.342: 96.5881% ( 15) 00:12:05.248 22.342 - 22.458: 96.6795% ( 9) 00:12:05.248 22.458 - 22.575: 96.7709% ( 9) 00:12:05.248 22.575 - 22.691: 96.9232% ( 15) 00:12:05.248 22.691 - 22.807: 96.9639% ( 4) 00:12:05.248 22.807 - 22.924: 96.9943% ( 3) 00:12:05.248 22.924 - 23.040: 97.0654% ( 7) 00:12:05.248 23.040 - 23.156: 97.1060% ( 4) 00:12:05.248 23.156 - 23.273: 97.1669% ( 6) 00:12:05.248 23.273 - 23.389: 97.2279% ( 6) 00:12:05.248 23.389 - 23.505: 97.3396% ( 11) 00:12:05.248 23.505 - 23.622: 97.4005% ( 6) 00:12:05.248 23.622 - 23.738: 97.5325% ( 13) 00:12:05.248 23.738 - 23.855: 97.6442% ( 11) 00:12:05.248 23.855 - 23.971: 97.7051% ( 6) 00:12:05.248 23.971 - 24.087: 97.7559% ( 5) 00:12:05.248 24.087 - 24.204: 97.8371% ( 8) 00:12:05.248 24.204 - 24.320: 97.9082% ( 7) 00:12:05.248 24.320 - 24.436: 97.9691% ( 6) 00:12:05.248 24.436 - 24.553: 98.0707% ( 10) 00:12:05.248 24.553 - 24.669: 98.1418% ( 7) 00:12:05.248 24.669 - 24.785: 98.1621% ( 2) 00:12:05.248 24.785 - 24.902: 98.2230% ( 6) 00:12:05.248 24.902 - 25.018: 98.2433% ( 2) 00:12:05.248 25.018 - 25.135: 98.2738% ( 3) 00:12:05.248 25.135 - 25.251: 98.3144% ( 4) 00:12:05.248 25.251 - 25.367: 98.3448% ( 3) 00:12:05.248 25.367 - 25.484: 98.3855% ( 4) 00:12:05.248 25.484 - 25.600: 98.4464% ( 6) 00:12:05.248 25.600 - 25.716: 98.5073% ( 6) 00:12:05.248 25.716 - 25.833: 98.5479% ( 4) 00:12:05.248 25.833 - 25.949: 98.5987% ( 5) 00:12:05.249 25.949 - 26.065: 98.6393% ( 4) 00:12:05.249 26.065 - 26.182: 98.6901% ( 5) 00:12:05.249 26.182 - 26.298: 98.7409% ( 5) 00:12:05.249 26.298 - 26.415: 98.7815% ( 4) 00:12:05.249 26.415 - 26.531: 98.8119% ( 3) 00:12:05.249 26.531 - 26.647: 98.8323% ( 2) 00:12:05.249 26.647 - 26.764: 98.8526% ( 2) 00:12:05.249 26.764 - 26.880: 98.8729% ( 2) 00:12:05.249 26.996 - 27.113: 98.9135% ( 4) 00:12:05.249 27.113 - 27.229: 98.9643% ( 5) 00:12:05.249 27.229 - 27.345: 98.9846% ( 2) 00:12:05.249 27.345 - 27.462: 99.0049% ( 2) 00:12:05.249 27.462 - 27.578: 99.0353% ( 3) 00:12:05.249 27.578 - 27.695: 99.0658% ( 3) 00:12:05.249 27.695 - 27.811: 99.0760% ( 1) 00:12:05.249 27.811 - 27.927: 99.1369% ( 6) 00:12:05.249 27.927 - 28.044: 99.1572% ( 2) 00:12:05.249 28.044 - 28.160: 99.1673% ( 1) 00:12:05.249 28.160 - 28.276: 99.1978% ( 3) 00:12:05.249 28.276 - 28.393: 99.2080% ( 1) 00:12:05.249 28.393 - 28.509: 99.2587% ( 5) 00:12:05.249 28.509 - 28.625: 99.2892% ( 3) 00:12:05.249 28.625 - 28.742: 99.2994% ( 1) 00:12:05.249 28.742 - 28.858: 
99.3197% ( 2) 00:12:05.249 28.975 - 29.091: 99.3501% ( 3) 00:12:05.249 29.091 - 29.207: 99.3704% ( 2) 00:12:05.249 29.207 - 29.324: 99.3806% ( 1) 00:12:05.249 29.324 - 29.440: 99.4110% ( 3) 00:12:05.249 29.440 - 29.556: 99.4314% ( 2) 00:12:05.249 29.556 - 29.673: 99.4618% ( 3) 00:12:05.249 29.673 - 29.789: 99.4720% ( 1) 00:12:05.249 29.789 - 30.022: 99.5024% ( 3) 00:12:05.249 30.022 - 30.255: 99.5126% ( 1) 00:12:05.249 30.255 - 30.487: 99.5532% ( 4) 00:12:05.249 30.487 - 30.720: 99.5837% ( 3) 00:12:05.249 30.720 - 30.953: 99.6141% ( 3) 00:12:05.249 30.953 - 31.185: 99.6344% ( 2) 00:12:05.249 31.185 - 31.418: 99.6548% ( 2) 00:12:05.249 31.418 - 31.651: 99.6649% ( 1) 00:12:05.249 31.651 - 31.884: 99.6751% ( 1) 00:12:05.249 31.884 - 32.116: 99.6954% ( 2) 00:12:05.249 32.116 - 32.349: 99.7157% ( 2) 00:12:05.249 32.349 - 32.582: 99.7360% ( 2) 00:12:05.249 32.582 - 32.815: 99.7766% ( 4) 00:12:05.249 33.280 - 33.513: 99.7868% ( 1) 00:12:05.249 33.978 - 34.211: 99.7969% ( 1) 00:12:05.249 34.444 - 34.676: 99.8071% ( 1) 00:12:05.249 34.676 - 34.909: 99.8172% ( 1) 00:12:05.249 35.142 - 35.375: 99.8274% ( 1) 00:12:05.249 35.375 - 35.607: 99.8375% ( 1) 00:12:05.249 36.538 - 36.771: 99.8781% ( 4) 00:12:05.249 37.004 - 37.236: 99.8883% ( 1) 00:12:05.249 37.236 - 37.469: 99.8985% ( 1) 00:12:05.249 37.469 - 37.702: 99.9086% ( 1) 00:12:05.249 40.029 - 40.262: 99.9188% ( 1) 00:12:05.249 43.753 - 43.985: 99.9289% ( 1) 00:12:05.249 43.985 - 44.218: 99.9391% ( 1) 00:12:05.249 46.313 - 46.545: 99.9492% ( 1) 00:12:05.249 56.087 - 56.320: 99.9594% ( 1) 00:12:05.249 57.949 - 58.182: 99.9695% ( 1) 00:12:05.249 58.880 - 59.113: 99.9797% ( 1) 00:12:05.249 104.727 - 105.193: 99.9898% ( 1) 00:12:05.249 119.156 - 120.087: 100.0000% ( 1) 00:12:05.249 00:12:05.249 Complete histogram 00:12:05.249 ================== 00:12:05.249 Range in us Cumulative Count 00:12:05.249 8.611 - 8.669: 0.0203% ( 2) 00:12:05.249 8.669 - 8.727: 0.0305% ( 1) 00:12:05.249 8.727 - 8.785: 0.0711% ( 4) 00:12:05.249 8.785 - 8.844: 0.3656% ( 29) 00:12:05.249 8.844 - 8.902: 1.9395% ( 155) 00:12:05.249 8.902 - 8.960: 7.9305% ( 590) 00:12:05.249 8.960 - 9.018: 19.6994% ( 1159) 00:12:05.249 9.018 - 9.076: 34.3623% ( 1444) 00:12:05.249 9.076 - 9.135: 46.9232% ( 1237) 00:12:05.249 9.135 - 9.193: 56.2246% ( 916) 00:12:05.249 9.193 - 9.251: 61.9720% ( 566) 00:12:05.249 9.251 - 9.309: 66.2165% ( 418) 00:12:05.249 9.309 - 9.367: 68.9582% ( 270) 00:12:05.249 9.367 - 9.425: 70.7758% ( 179) 00:12:05.249 9.425 - 9.484: 72.1466% ( 135) 00:12:05.249 9.484 - 9.542: 72.7864% ( 63) 00:12:05.249 9.542 - 9.600: 73.0097% ( 22) 00:12:05.249 9.600 - 9.658: 73.2027% ( 19) 00:12:05.249 9.658 - 9.716: 73.3956% ( 19) 00:12:05.249 9.716 - 9.775: 73.5581% ( 16) 00:12:05.249 9.775 - 9.833: 73.8424% ( 28) 00:12:05.249 9.833 - 9.891: 74.3400% ( 49) 00:12:05.249 9.891 - 9.949: 75.0000% ( 65) 00:12:05.249 9.949 - 10.007: 75.7108% ( 70) 00:12:05.249 10.007 - 10.065: 76.5130% ( 79) 00:12:05.249 10.065 - 10.124: 77.1019% ( 58) 00:12:05.249 10.124 - 10.182: 77.6604% ( 55) 00:12:05.249 10.182 - 10.240: 78.0565% ( 39) 00:12:05.249 10.240 - 10.298: 78.2494% ( 19) 00:12:05.249 10.298 - 10.356: 78.4423% ( 19) 00:12:05.249 10.356 - 10.415: 78.5743% ( 13) 00:12:05.249 10.415 - 10.473: 78.7774% ( 20) 00:12:05.249 10.473 - 10.531: 79.0719% ( 29) 00:12:05.249 10.531 - 10.589: 79.3258% ( 25) 00:12:05.249 10.589 - 10.647: 79.4476% ( 12) 00:12:05.249 10.647 - 10.705: 79.5796% ( 13) 00:12:05.249 10.705 - 10.764: 79.6608% ( 8) 00:12:05.249 10.764 - 10.822: 79.7725% ( 11) 00:12:05.249 10.822 - 10.880: 
79.8538% ( 8) 00:12:05.249 10.880 - 10.938: 79.9147% ( 6) 00:12:05.249 10.938 - 10.996: 79.9350% ( 2) 00:12:05.249 10.996 - 11.055: 80.0264% ( 9) 00:12:05.249 11.055 - 11.113: 80.1279% ( 10) 00:12:05.249 11.113 - 11.171: 80.7677% ( 63) 00:12:05.249 11.171 - 11.229: 82.8493% ( 205) 00:12:05.249 11.229 - 11.287: 85.5402% ( 265) 00:12:05.249 11.287 - 11.345: 88.0077% ( 243) 00:12:05.249 11.345 - 11.404: 89.7949% ( 176) 00:12:05.249 11.404 - 11.462: 91.0134% ( 120) 00:12:05.249 11.462 - 11.520: 91.7242% ( 70) 00:12:05.249 11.520 - 11.578: 92.0593% ( 33) 00:12:05.249 11.578 - 11.636: 92.2827% ( 22) 00:12:05.249 11.636 - 11.695: 92.4858% ( 20) 00:12:05.249 11.695 - 11.753: 92.5975% ( 11) 00:12:05.249 11.753 - 11.811: 92.7498% ( 15) 00:12:05.249 11.811 - 11.869: 92.8412% ( 9) 00:12:05.249 11.869 - 11.927: 92.9630% ( 12) 00:12:05.249 11.927 - 11.985: 93.1560% ( 19) 00:12:05.249 11.985 - 12.044: 93.2981% ( 14) 00:12:05.249 12.044 - 12.102: 93.4301% ( 13) 00:12:05.249 12.102 - 12.160: 93.5520% ( 12) 00:12:05.249 12.160 - 12.218: 93.6129% ( 6) 00:12:05.249 12.218 - 12.276: 93.7652% ( 15) 00:12:05.249 12.276 - 12.335: 93.9582% ( 19) 00:12:05.249 12.335 - 12.393: 94.1511% ( 19) 00:12:05.249 12.393 - 12.451: 94.2831% ( 13) 00:12:05.249 12.451 - 12.509: 94.4963% ( 21) 00:12:05.249 12.509 - 12.567: 94.5979% ( 10) 00:12:05.249 12.567 - 12.625: 94.6588% ( 6) 00:12:05.249 12.625 - 12.684: 94.6791% ( 2) 00:12:05.249 12.684 - 12.742: 94.7400% ( 6) 00:12:05.249 12.742 - 12.800: 94.7908% ( 5) 00:12:05.249 12.800 - 12.858: 94.8517% ( 6) 00:12:05.249 12.858 - 12.916: 94.8924% ( 4) 00:12:05.249 12.916 - 12.975: 94.9634% ( 7) 00:12:05.249 12.975 - 13.033: 95.0548% ( 9) 00:12:05.249 13.033 - 13.091: 95.0751% ( 2) 00:12:05.249 13.091 - 13.149: 95.0955% ( 2) 00:12:05.249 13.149 - 13.207: 95.1361% ( 4) 00:12:05.249 13.207 - 13.265: 95.1767% ( 4) 00:12:05.249 13.265 - 13.324: 95.2071% ( 3) 00:12:05.249 13.324 - 13.382: 95.2173% ( 1) 00:12:05.249 13.382 - 13.440: 95.2884% ( 7) 00:12:05.249 13.440 - 13.498: 95.2985% ( 1) 00:12:05.249 13.498 - 13.556: 95.3087% ( 1) 00:12:05.249 13.556 - 13.615: 95.3188% ( 1) 00:12:05.249 13.615 - 13.673: 95.3595% ( 4) 00:12:05.249 13.673 - 13.731: 95.4102% ( 5) 00:12:05.249 13.731 - 13.789: 95.4407% ( 3) 00:12:05.249 13.789 - 13.847: 95.4610% ( 2) 00:12:05.249 13.847 - 13.905: 95.5118% ( 5) 00:12:05.249 13.905 - 13.964: 95.5727% ( 6) 00:12:05.249 13.964 - 14.022: 95.6133% ( 4) 00:12:05.249 14.022 - 14.080: 95.6539% ( 4) 00:12:05.249 14.080 - 14.138: 95.6946% ( 4) 00:12:05.249 14.138 - 14.196: 95.7352% ( 4) 00:12:05.249 14.255 - 14.313: 95.7758% ( 4) 00:12:05.249 14.313 - 14.371: 95.7961% ( 2) 00:12:05.249 14.371 - 14.429: 95.8672% ( 7) 00:12:05.249 14.429 - 14.487: 95.8976% ( 3) 00:12:05.249 14.487 - 14.545: 95.9281% ( 3) 00:12:05.249 14.545 - 14.604: 95.9586% ( 3) 00:12:05.249 14.604 - 14.662: 95.9687% ( 1) 00:12:05.249 14.662 - 14.720: 95.9890% ( 2) 00:12:05.249 14.720 - 14.778: 96.0195% ( 3) 00:12:05.249 14.778 - 14.836: 96.0500% ( 3) 00:12:05.249 14.836 - 14.895: 96.0906% ( 4) 00:12:05.249 14.895 - 15.011: 96.1617% ( 7) 00:12:05.249 15.011 - 15.127: 96.2327% ( 7) 00:12:05.249 15.127 - 15.244: 96.3546% ( 12) 00:12:05.249 15.244 - 15.360: 96.4663% ( 11) 00:12:05.249 15.360 - 15.476: 96.5678% ( 10) 00:12:05.249 15.476 - 15.593: 96.6288% ( 6) 00:12:05.249 15.593 - 15.709: 96.7100% ( 8) 00:12:05.250 15.709 - 15.825: 96.7608% ( 5) 00:12:05.250 15.825 - 15.942: 96.8725% ( 11) 00:12:05.250 15.942 - 16.058: 96.9435% ( 7) 00:12:05.250 16.058 - 16.175: 97.0045% ( 6) 00:12:05.250 16.175 - 16.291: 
97.0857% ( 8) 00:12:05.250 16.291 - 16.407: 97.1872% ( 10) 00:12:05.250 16.407 - 16.524: 97.2685% ( 8) 00:12:05.250 16.524 - 16.640: 97.4106% ( 14) 00:12:05.250 16.640 - 16.756: 97.5528% ( 14) 00:12:05.250 16.756 - 16.873: 97.6645% ( 11) 00:12:05.250 16.873 - 16.989: 97.7559% ( 9) 00:12:05.250 16.989 - 17.105: 97.8168% ( 6) 00:12:05.250 17.105 - 17.222: 97.8981% ( 8) 00:12:05.250 17.222 - 17.338: 97.9488% ( 5) 00:12:05.250 17.338 - 17.455: 98.0504% ( 10) 00:12:05.250 17.455 - 17.571: 98.1418% ( 9) 00:12:05.250 17.571 - 17.687: 98.2128% ( 7) 00:12:05.250 17.687 - 17.804: 98.2433% ( 3) 00:12:05.250 17.804 - 17.920: 98.2738% ( 3) 00:12:05.250 17.920 - 18.036: 98.3245% ( 5) 00:12:05.250 18.036 - 18.153: 98.3652% ( 4) 00:12:05.250 18.153 - 18.269: 98.3956% ( 3) 00:12:05.250 18.269 - 18.385: 98.4464% ( 5) 00:12:05.250 18.502 - 18.618: 98.5073% ( 6) 00:12:05.250 18.618 - 18.735: 98.5682% ( 6) 00:12:05.250 18.735 - 18.851: 98.6089% ( 4) 00:12:05.250 18.851 - 18.967: 98.6799% ( 7) 00:12:05.250 18.967 - 19.084: 98.7510% ( 7) 00:12:05.250 19.084 - 19.200: 98.8119% ( 6) 00:12:05.250 19.200 - 19.316: 98.8627% ( 5) 00:12:05.250 19.316 - 19.433: 98.8932% ( 3) 00:12:05.250 19.433 - 19.549: 98.9338% ( 4) 00:12:05.250 19.549 - 19.665: 98.9643% ( 3) 00:12:05.250 19.665 - 19.782: 99.0049% ( 4) 00:12:05.250 19.782 - 19.898: 99.0252% ( 2) 00:12:05.250 19.898 - 20.015: 99.1166% ( 9) 00:12:05.250 20.015 - 20.131: 99.1470% ( 3) 00:12:05.250 20.131 - 20.247: 99.1877% ( 4) 00:12:05.250 20.247 - 20.364: 99.2587% ( 7) 00:12:05.250 20.364 - 20.480: 99.2994% ( 4) 00:12:05.250 20.480 - 20.596: 99.3197% ( 2) 00:12:05.250 20.596 - 20.713: 99.3298% ( 1) 00:12:05.250 20.713 - 20.829: 99.3501% ( 2) 00:12:05.250 20.829 - 20.945: 99.3603% ( 1) 00:12:05.250 20.945 - 21.062: 99.3806% ( 2) 00:12:05.250 21.062 - 21.178: 99.4110% ( 3) 00:12:05.250 21.178 - 21.295: 99.4212% ( 1) 00:12:05.250 21.295 - 21.411: 99.4415% ( 2) 00:12:05.250 21.411 - 21.527: 99.4517% ( 1) 00:12:05.250 21.527 - 21.644: 99.4618% ( 1) 00:12:05.250 21.644 - 21.760: 99.4720% ( 1) 00:12:05.250 21.760 - 21.876: 99.4821% ( 1) 00:12:05.250 21.876 - 21.993: 99.5024% ( 2) 00:12:05.250 21.993 - 22.109: 99.5227% ( 2) 00:12:05.250 22.109 - 22.225: 99.5532% ( 3) 00:12:05.250 22.575 - 22.691: 99.5634% ( 1) 00:12:05.250 22.691 - 22.807: 99.5735% ( 1) 00:12:05.250 22.807 - 22.924: 99.5837% ( 1) 00:12:05.250 22.924 - 23.040: 99.6040% ( 2) 00:12:05.250 23.040 - 23.156: 99.6141% ( 1) 00:12:05.250 23.273 - 23.389: 99.6344% ( 2) 00:12:05.250 23.505 - 23.622: 99.6548% ( 2) 00:12:05.250 23.622 - 23.738: 99.6751% ( 2) 00:12:05.250 23.738 - 23.855: 99.6954% ( 2) 00:12:05.250 24.087 - 24.204: 99.7157% ( 2) 00:12:05.250 24.785 - 24.902: 99.7258% ( 1) 00:12:05.250 24.902 - 25.018: 99.7360% ( 1) 00:12:05.250 25.018 - 25.135: 99.7563% ( 2) 00:12:05.250 25.949 - 26.065: 99.7665% ( 1) 00:12:05.250 26.065 - 26.182: 99.7868% ( 2) 00:12:05.250 26.298 - 26.415: 99.7969% ( 1) 00:12:05.250 26.880 - 26.996: 99.8071% ( 1) 00:12:05.250 26.996 - 27.113: 99.8172% ( 1) 00:12:05.250 27.113 - 27.229: 99.8274% ( 1) 00:12:05.250 27.811 - 27.927: 99.8375% ( 1) 00:12:05.250 28.276 - 28.393: 99.8477% ( 1) 00:12:05.250 28.625 - 28.742: 99.8578% ( 1) 00:12:05.250 28.742 - 28.858: 99.8680% ( 1) 00:12:05.250 28.858 - 28.975: 99.8883% ( 2) 00:12:05.250 30.022 - 30.255: 99.8985% ( 1) 00:12:05.250 31.418 - 31.651: 99.9086% ( 1) 00:12:05.250 32.116 - 32.349: 99.9188% ( 1) 00:12:05.250 35.142 - 35.375: 99.9289% ( 1) 00:12:05.250 39.564 - 39.796: 99.9391% ( 1) 00:12:05.250 40.495 - 40.727: 99.9492% ( 1) 00:12:05.250 
49.804 - 50.036: 99.9594% ( 1) 00:12:05.250 58.182 - 58.415: 99.9695% ( 1) 00:12:05.250 59.578 - 60.044: 99.9797% ( 1) 00:12:05.250 74.007 - 74.473: 99.9898% ( 1) 00:12:05.250 83.782 - 84.247: 100.0000% ( 1) 00:12:05.250 00:12:05.250 ************************************ 00:12:05.250 END TEST nvme_overhead 00:12:05.250 ************************************ 00:12:05.250 00:12:05.250 real 0m1.316s 00:12:05.250 user 0m1.124s 00:12:05.250 sys 0m0.138s 00:12:05.250 12:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.250 12:33:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.250 12:33:14 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:05.250 12:33:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:12:05.250 12:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:05.250 12:33:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.250 ************************************ 00:12:05.250 START TEST nvme_arbitration 00:12:05.250 ************************************ 00:12:05.250 12:33:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:09.434 Initializing NVMe Controllers 00:12:09.434 Attached to 0000:00:06.0 00:12:09.434 Attached to 0000:00:07.0 00:12:09.434 Attached to 0000:00:09.0 00:12:09.434 Attached to 0000:00:08.0 00:12:09.434 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:12:09.434 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:12:09.434 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:09.434 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:09.434 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:09.434 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:09.434 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:09.434 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:09.434 Initialization complete. Launching workers. 
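[Annotation] The submit and complete histograms above are built by timestamping each I/O twice: once around the submission call and once around the completion poll. A condensed sketch of that bookkeeping, assuming `ns`, `qpair`, and a DMA-safe buffer are already set up; note the real overhead tool measures only the driver-side slice of the completion path and buckets results, so this simplification also counts device latency in the second number.

```c
/* Sketch: per-I/O submit vs. complete timing in env ticks, the two
 * quantities the histograms above bucket. */
#include <spdk/nvme.h>
#include <spdk/env.h>
#include <stdio.h>

static bool g_done;

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
}

static void
time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	double ns_per_tick = 1e9 / spdk_get_ticks_hz();
	uint64_t start;

	g_done = false;
	start = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1, io_done, NULL, 0);
	uint64_t submit = spdk_get_ticks() - start;   /* "submit" bucket */

	start = spdk_get_ticks();
	while (!g_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
	uint64_t complete = spdk_get_ticks() - start; /* "complete" bucket */

	printf("submit %.1f ns, complete %.1f ns\n",
	       submit * ns_per_tick, complete * ns_per_tick);
}
```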
00:12:09.434 Starting thread on core 1 with urgent priority queue 00:12:09.434 Starting thread on core 2 with urgent priority queue 00:12:09.434 Starting thread on core 3 with urgent priority queue 00:12:09.434 Starting thread on core 0 with urgent priority queue 00:12:09.434 QEMU NVMe Ctrl (12340 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:12:09.434 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:12:09.434 QEMU NVMe Ctrl (12341 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:12:09.434 QEMU NVMe Ctrl (12342 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:12:09.434 QEMU NVMe Ctrl (12343 ) core 2: 746.67 IO/s 133.93 secs/100000 ios 00:12:09.434 QEMU NVMe Ctrl (12342 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:12:09.434 ======================================================== 00:12:09.434 00:12:09.434 ************************************ 00:12:09.434 END TEST nvme_arbitration 00:12:09.434 ************************************ 00:12:09.434 00:12:09.434 real 0m3.556s 00:12:09.434 user 0m9.711s 00:12:09.434 sys 0m0.163s 00:12:09.434 12:33:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.434 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:12:09.434 12:33:17 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:12:09.434 12:33:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:12:09.434 12:33:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.434 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:12:09.434 ************************************ 00:12:09.434 START TEST nvme_single_aen 00:12:09.434 ************************************ 00:12:09.434 12:33:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:12:09.434 [2024-05-15 12:33:17.780538] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:09.434 [2024-05-15 12:33:17.780636] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.434 [2024-05-15 12:33:17.970049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:09.434 [2024-05-15 12:33:17.972031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:12:09.434 [2024-05-15 12:33:17.973490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:12:09.434 [2024-05-15 12:33:17.974877] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:12:09.434 Asynchronous Event Request test 00:12:09.434 Attached to 0000:00:06.0 00:12:09.434 Attached to 0000:00:07.0 00:12:09.434 Attached to 0000:00:09.0 00:12:09.434 Attached to 0000:00:08.0 00:12:09.434 Reset controller to setup AER completions for this process 00:12:09.434 Registering asynchronous event callbacks... 
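[Annotation] The per-core IO/s spread in the arbitration summary above is the point of the test: queues are created at different priorities under weighted round robin, so higher-priority cores drain faster. A sketch of the two SPDK knobs involved, assuming the controller accepts WRR arbitration:

```c
/* Sketch: request WRR arbitration at probe time, then allocate I/O
 * queues at a chosen priority, as the arbitration run above does. */
#include <spdk/nvme.h>

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	/* Must be requested before the controller is enabled. */
	opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
	return true;
}

static struct spdk_nvme_qpair *
alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio prio)
{
	struct spdk_nvme_io_qpair_opts qopts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
	qopts.qprio = prio; /* SPDK_NVME_QPRIO_URGENT, _HIGH, _MEDIUM, _LOW */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
}
```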
00:12:09.434 Getting orig temperature thresholds of all controllers 00:12:09.434 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:09.434 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:09.434 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:09.434 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:09.434 Setting all controllers temperature threshold low to trigger AER 00:12:09.434 Waiting for all controllers temperature threshold to be set lower 00:12:09.434 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:09.434 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:12:09.434 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:09.434 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:12:09.434 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:09.434 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:12:09.434 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:09.434 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:12:09.434 Waiting for all controllers to trigger AER and reset threshold 00:12:09.434 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:09.434 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:09.434 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:09.434 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:09.435 Cleaning up... 00:12:09.435 ************************************ 00:12:09.435 END TEST nvme_single_aen 00:12:09.435 ************************************ 00:12:09.435 00:12:09.435 real 0m0.283s 00:12:09.435 user 0m0.112s 00:12:09.435 sys 0m0.127s 00:12:09.435 12:33:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.435 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:12:09.435 12:33:18 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:09.435 12:33:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:09.435 12:33:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.435 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:12:09.435 ************************************ 00:12:09.435 START TEST nvme_doorbell_aers 00:12:09.435 ************************************ 00:12:09.435 12:33:18 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:12:09.435 12:33:18 -- nvme/nvme.sh@70 -- # bdfs=() 00:12:09.435 12:33:18 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:09.435 12:33:18 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:09.435 12:33:18 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:09.435 12:33:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:09.435 12:33:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:09.435 12:33:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:09.435 12:33:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:09.435 12:33:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:09.435 12:33:18 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:09.435 12:33:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:09.435 12:33:18 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:09.435 12:33:18 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:09.435 [2024-05-15 12:33:18.399564] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:19.406 Executing: test_write_invalid_db 00:12:19.406 Waiting for AER completion... 00:12:19.406 Failure: test_write_invalid_db 00:12:19.406 00:12:19.406 Executing: test_invalid_db_write_overflow_sq 00:12:19.406 Waiting for AER completion... 00:12:19.406 Failure: test_invalid_db_write_overflow_sq 00:12:19.406 00:12:19.406 Executing: test_invalid_db_write_overflow_cq 00:12:19.406 Waiting for AER completion... 00:12:19.406 Failure: test_invalid_db_write_overflow_cq 00:12:19.406 00:12:19.406 12:33:28 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:19.406 12:33:28 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:12:19.665 [2024-05-15 12:33:28.456405] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:29.640 Executing: test_write_invalid_db 00:12:29.640 Waiting for AER completion... 00:12:29.640 Failure: test_write_invalid_db 00:12:29.640 00:12:29.640 Executing: test_invalid_db_write_overflow_sq 00:12:29.640 Waiting for AER completion... 00:12:29.640 Failure: test_invalid_db_write_overflow_sq 00:12:29.640 00:12:29.640 Executing: test_invalid_db_write_overflow_cq 00:12:29.640 Waiting for AER completion... 00:12:29.640 Failure: test_invalid_db_write_overflow_cq 00:12:29.640 00:12:29.640 12:33:38 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:29.640 12:33:38 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:12:29.640 [2024-05-15 12:33:38.471633] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:39.626 Executing: test_write_invalid_db 00:12:39.626 Waiting for AER completion... 00:12:39.626 Failure: test_write_invalid_db 00:12:39.626 00:12:39.626 Executing: test_invalid_db_write_overflow_sq 00:12:39.626 Waiting for AER completion... 00:12:39.626 Failure: test_invalid_db_write_overflow_sq 00:12:39.626 00:12:39.626 Executing: test_invalid_db_write_overflow_cq 00:12:39.626 Waiting for AER completion... 00:12:39.626 Failure: test_invalid_db_write_overflow_cq 00:12:39.626 00:12:39.626 12:33:48 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:39.626 12:33:48 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:39.626 [2024-05-15 12:33:48.537294] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.601 Executing: test_write_invalid_db 00:12:49.601 Waiting for AER completion... 00:12:49.601 Failure: test_write_invalid_db 00:12:49.601 00:12:49.601 Executing: test_invalid_db_write_overflow_sq 00:12:49.601 Waiting for AER completion... 00:12:49.601 Failure: test_invalid_db_write_overflow_sq 00:12:49.601 00:12:49.601 Executing: test_invalid_db_write_overflow_cq 00:12:49.601 Waiting for AER completion... 
00:12:49.601 Failure: test_invalid_db_write_overflow_cq 00:12:49.601 00:12:49.601 00:12:49.601 real 0m40.247s 00:12:49.601 user 0m33.697s 00:12:49.601 sys 0m6.149s 00:12:49.601 12:33:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.601 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:12:49.601 ************************************ 00:12:49.601 END TEST nvme_doorbell_aers 00:12:49.601 ************************************ 00:12:49.601 12:33:58 -- nvme/nvme.sh@97 -- # uname 00:12:49.601 12:33:58 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:49.601 12:33:58 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:49.601 12:33:58 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:12:49.601 12:33:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:49.601 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:12:49.601 ************************************ 00:12:49.601 START TEST nvme_multi_aen 00:12:49.601 ************************************ 00:12:49.601 12:33:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:49.601 [2024-05-15 12:33:58.428714] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:49.601 [2024-05-15 12:33:58.428850] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.860 [2024-05-15 12:33:58.623275] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:49.860 [2024-05-15 12:33:58.623364] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.623405] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.623426] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.625172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:12:49.860 [2024-05-15 12:33:58.625212] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.625235] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.625254] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.626589] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:12:49.860 [2024-05-15 12:33:58.626626] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.626651] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 
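[Annotation] The AER tests in this stretch (single aen above, multi_aen below) all follow one pattern: register an asynchronous-event callback, push the temperature threshold below the ~323 K current readings, and poll admin completions until the controller posts the event -- the "Resetting Temp Threshold" / "Current Temperature" pairs in the log. A compressed sketch, assuming an attached controller; 200 K is an arbitrary below-ambient threshold chosen for illustration.

```c
/* Sketch: the AER round-trip driven repeatedly above -- register the
 * callback, lower the threshold, poll until the event lands. */
#include <spdk/nvme.h>
#include <stdio.h>

static bool g_aer_seen;

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("aer_cb - async event, cdw0=0x%x\n", cpl->cdw0);
		g_aer_seen = true;
	}
}

static void
set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* corresponds to the "Resetting Temp Threshold" point above */
}

static void
wait_for_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* cdw11 bits 15:0 carry the composite temperature threshold (Kelvin) */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					200, 0, NULL, 0, set_feature_done, NULL);

	while (!g_aer_seen) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```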
00:12:49.860 [2024-05-15 12:33:58.626673] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.628020] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:12:49.860 [2024-05-15 12:33:58.628053] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.628074] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.628092] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65495) is not found. Dropping the request. 00:12:49.860 [2024-05-15 12:33:58.637453] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:49.860 Child process pid: 66009 00:12:49.860 [2024-05-15 12:33:58.637697] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.119 [Child] Asynchronous Event Request test 00:12:50.119 [Child] Attached to 0000:00:06.0 00:12:50.119 [Child] Attached to 0000:00:07.0 00:12:50.119 [Child] Attached to 0000:00:09.0 00:12:50.119 [Child] Attached to 0000:00:08.0 00:12:50.119 [Child] Registering asynchronous event callbacks... 00:12:50.119 [Child] Getting orig temperature thresholds of all controllers 00:12:50.119 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.119 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.119 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.119 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.119 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:50.119 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.119 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.119 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.119 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.119 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.119 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.119 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.119 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.119 [Child] Cleaning up... 00:12:50.119 Asynchronous Event Request test 00:12:50.119 Attached to 0000:00:06.0 00:12:50.119 Attached to 0000:00:07.0 00:12:50.119 Attached to 0000:00:09.0 00:12:50.119 Attached to 0000:00:08.0 00:12:50.119 Reset controller to setup AER completions for this process 00:12:50.119 Registering asynchronous event callbacks... 
00:12:50.119 Getting orig temperature thresholds of all controllers 00:12:50.119 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.119 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.119 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.119 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:50.120 Setting all controllers temperature threshold low to trigger AER 00:12:50.120 Waiting for all controllers temperature threshold to be set lower 00:12:50.120 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.120 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:12:50.120 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.120 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:12:50.120 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.120 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:12:50.120 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:50.120 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:12:50.120 Waiting for all controllers to trigger AER and reset threshold 00:12:50.120 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.120 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.120 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.120 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:50.120 Cleaning up... 00:12:50.120 ************************************ 00:12:50.120 END TEST nvme_multi_aen 00:12:50.120 ************************************ 00:12:50.120 00:12:50.120 real 0m0.592s 00:12:50.120 user 0m0.217s 00:12:50.120 sys 0m0.273s 00:12:50.120 12:33:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.120 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:12:50.120 12:33:58 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:50.120 12:33:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:50.120 12:33:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:50.120 12:33:58 -- common/autotest_common.sh@10 -- # set +x 00:12:50.120 ************************************ 00:12:50.120 START TEST nvme_startup 00:12:50.120 ************************************ 00:12:50.120 12:33:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:50.379 Initializing NVMe Controllers 00:12:50.379 Attached to 0000:00:06.0 00:12:50.379 Attached to 0000:00:07.0 00:12:50.379 Attached to 0000:00:09.0 00:12:50.379 Attached to 0000:00:08.0 00:12:50.379 Initialization complete. 00:12:50.379 Time used:204802.906 (us). 
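[Annotation] The "Time used:204802.906 (us)." figure from the startup test above is simply the wall time of probe and attach. A sketch of that measurement, assuming the env layer is already initialized:

```c
/* Sketch: reproduce the startup "Time used" number -- wall time of
 * spdk_nvme_probe(). Assumes spdk_env_init() has already run. */
#include <spdk/nvme.h>
#include <spdk/env.h>
#include <stdio.h>

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
}

static void
time_startup(void)
{
	uint64_t start = spdk_get_ticks();

	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
	printf("Time used:%.3f (us).\n",
	       (spdk_get_ticks() - start) * 1e6 / spdk_get_ticks_hz());
}
```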
00:12:50.379 ************************************ 00:12:50.379 END TEST nvme_startup 00:12:50.379 ************************************ 00:12:50.379 00:12:50.379 real 0m0.291s 00:12:50.379 user 0m0.097s 00:12:50.379 sys 0m0.148s 00:12:50.379 12:33:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.379 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:12:50.379 12:33:59 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:50.379 12:33:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:50.379 12:33:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:50.379 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:12:50.379 ************************************ 00:12:50.379 START TEST nvme_multi_secondary 00:12:50.379 ************************************ 00:12:50.379 12:33:59 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:12:50.379 12:33:59 -- nvme/nvme.sh@52 -- # pid0=66065 00:12:50.379 12:33:59 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:50.379 12:33:59 -- nvme/nvme.sh@54 -- # pid1=66066 00:12:50.379 12:33:59 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:50.379 12:33:59 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:54.564 Initializing NVMe Controllers 00:12:54.564 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:54.564 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:54.564 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:54.564 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:54.564 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:12:54.564 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:12:54.564 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:12:54.564 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:12:54.564 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:12:54.564 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:12:54.564 Initialization complete. Launching workers. 
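[Annotation] The multi_secondary test above launches paired spdk_nvme_perf processes that share controllers; the `-i 0` on each command line is the shared-memory ID that keys the DPDK multi-process model. A sketch of the env wiring behind that flag (the process name is hypothetical):

```c
/* Sketch: what -i <id> maps to at the env layer -- the same shm_id puts
 * primary and secondary processes in one hugepage domain. */
#include <spdk/env.h>

static int
init_shared_env(int shm_id, const char *core_mask)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "nvme_perf";    /* hypothetical process name */
	opts.shm_id = shm_id;       /* same value in both processes */
	opts.core_mask = core_mask; /* e.g. 0x1 vs 0x2 in the runs above */
	/* The first process to start becomes the DPDK primary; later ones
	 * with the same shm_id attach as secondaries. */
	return spdk_env_init(&opts);
}
```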
00:12:54.564 ======================================================== 00:12:54.564 Latency(us) 00:12:54.564 Device Information : IOPS MiB/s Average min max 00:12:54.564 PCIE (0000:00:06.0) NSID 1 from core 1: 5901.35 23.05 2709.51 1222.87 6479.14 00:12:54.564 PCIE (0000:00:07.0) NSID 1 from core 1: 5901.35 23.05 2710.82 1275.90 6061.70 00:12:54.564 PCIE (0000:00:09.0) NSID 1 from core 1: 5901.35 23.05 2710.84 1215.41 6220.67 00:12:54.564 PCIE (0000:00:08.0) NSID 1 from core 1: 5901.35 23.05 2710.71 1284.15 7146.77 00:12:54.564 PCIE (0000:00:08.0) NSID 2 from core 1: 5901.35 23.05 2710.77 1242.00 6733.61 00:12:54.564 PCIE (0000:00:08.0) NSID 3 from core 1: 5901.35 23.05 2710.71 1265.27 6776.15 00:12:54.564 ======================================================== 00:12:54.564 Total : 35408.11 138.31 2710.56 1215.41 7146.77 00:12:54.564 00:12:54.564 Initializing NVMe Controllers 00:12:54.564 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:54.564 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:54.564 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:54.564 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:54.564 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:12:54.564 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:12:54.564 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:12:54.564 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:12:54.564 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:12:54.564 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:12:54.564 Initialization complete. Launching workers. 00:12:54.564 ======================================================== 00:12:54.564 Latency(us) 00:12:54.564 Device Information : IOPS MiB/s Average min max 00:12:54.564 PCIE (0000:00:06.0) NSID 1 from core 2: 2530.32 9.88 6321.61 1858.41 17752.93 00:12:54.564 PCIE (0000:00:07.0) NSID 1 from core 2: 2530.32 9.88 6322.88 1866.65 13890.78 00:12:54.564 PCIE (0000:00:09.0) NSID 1 from core 2: 2530.32 9.88 6322.20 1624.52 14166.35 00:12:54.564 PCIE (0000:00:08.0) NSID 1 from core 2: 2530.32 9.88 6322.57 2026.08 14504.61 00:12:54.565 PCIE (0000:00:08.0) NSID 2 from core 2: 2530.32 9.88 6322.74 2034.13 14339.03 00:12:54.565 PCIE (0000:00:08.0) NSID 3 from core 2: 2530.32 9.88 6318.42 1842.22 14403.13 00:12:54.565 ======================================================== 00:12:54.565 Total : 15181.93 59.30 6321.74 1624.52 17752.93 00:12:54.565 00:12:54.565 12:34:03 -- nvme/nvme.sh@56 -- # wait 66065 00:12:55.943 Initializing NVMe Controllers 00:12:55.943 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:55.943 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:55.943 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:55.943 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:55.943 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:55.943 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:55.943 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:55.943 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:55.943 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:55.943 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:55.943 Initialization complete. Launching workers. 
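[Annotation] A quick consistency check on these latency tables, using the `-q 16 -o 4096` settings from the perf command lines above: the MiB/s column is IOPS scaled by the 4 KiB transfer size, and the average latency follows from Little's law at queue depth 16. Taking the first core-1 row:

\[
\text{MiB/s} = \frac{\text{IOPS} \times 4096}{2^{20}} = \frac{5901.35 \times 4096}{1048576} \approx 23.05
\]
\[
\bar{t} \approx \frac{QD}{\text{IOPS}} = \frac{16}{5901.35\ \text{s}^{-1}} \approx 2711\ \mu\text{s}
\]

which matches the reported 23.05 MiB/s and ~2710 us averages.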
00:12:55.943 ======================================================== 00:12:55.943 Latency(us) 00:12:55.943 Device Information : IOPS MiB/s Average min max 00:12:55.943 PCIE (0000:00:06.0) NSID 1 from core 0: 8667.29 33.86 1844.50 871.37 6940.73 00:12:55.943 PCIE (0000:00:07.0) NSID 1 from core 0: 8667.29 33.86 1845.58 879.21 6690.11 00:12:55.943 PCIE (0000:00:09.0) NSID 1 from core 0: 8667.09 33.86 1845.58 895.32 7292.99 00:12:55.943 PCIE (0000:00:08.0) NSID 1 from core 0: 8667.29 33.86 1845.49 869.56 8046.25 00:12:55.943 PCIE (0000:00:08.0) NSID 2 from core 0: 8667.29 33.86 1845.45 813.96 7715.56 00:12:55.943 PCIE (0000:00:08.0) NSID 3 from core 0: 8667.29 33.86 1845.40 746.67 7653.05 00:12:55.943 ======================================================== 00:12:55.943 Total : 52003.53 203.14 1845.34 746.67 8046.25 00:12:55.943 00:12:55.943 12:34:04 -- nvme/nvme.sh@57 -- # wait 66066 00:12:55.943 12:34:04 -- nvme/nvme.sh@61 -- # pid0=66136 00:12:55.943 12:34:04 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:55.943 12:34:04 -- nvme/nvme.sh@63 -- # pid1=66137 00:12:55.943 12:34:04 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:55.943 12:34:04 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:00.132 Initializing NVMe Controllers 00:13:00.132 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:00.132 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:13:00.132 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:13:00.132 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:13:00.132 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:13:00.132 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:13:00.132 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:13:00.132 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:13:00.132 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:13:00.132 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:13:00.132 Initialization complete. Launching workers. 
00:13:00.132 ======================================================== 00:13:00.132 Latency(us) 00:13:00.132 Device Information : IOPS MiB/s Average min max 00:13:00.132 PCIE (0000:00:06.0) NSID 1 from core 1: 6014.08 23.49 2658.57 980.00 6366.20 00:13:00.132 PCIE (0000:00:07.0) NSID 1 from core 1: 6014.08 23.49 2660.34 1011.23 6119.39 00:13:00.132 PCIE (0000:00:09.0) NSID 1 from core 1: 6014.08 23.49 2660.37 1022.22 5836.10 00:13:00.132 PCIE (0000:00:08.0) NSID 1 from core 1: 6014.08 23.49 2660.27 1009.16 5255.28 00:13:00.132 PCIE (0000:00:08.0) NSID 2 from core 1: 6014.08 23.49 2660.37 1022.94 6033.00 00:13:00.132 PCIE (0000:00:08.0) NSID 3 from core 1: 6014.08 23.49 2660.44 1013.24 6403.62 00:13:00.132 ======================================================== 00:13:00.132 Total : 36084.45 140.95 2660.06 980.00 6403.62 00:13:00.132 00:13:00.132 Initializing NVMe Controllers 00:13:00.132 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:00.132 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:13:00.132 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:13:00.132 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:13:00.132 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:13:00.132 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:13:00.132 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:13:00.132 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:13:00.132 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:13:00.132 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:13:00.132 Initialization complete. Launching workers. 00:13:00.132 ======================================================== 00:13:00.132 Latency(us) 00:13:00.132 Device Information : IOPS MiB/s Average min max 00:13:00.132 PCIE (0000:00:06.0) NSID 1 from core 0: 6063.71 23.69 2636.81 1018.32 5883.41 00:13:00.132 PCIE (0000:00:07.0) NSID 1 from core 0: 6063.71 23.69 2638.28 1034.20 5791.50 00:13:00.132 PCIE (0000:00:09.0) NSID 1 from core 0: 6063.71 23.69 2638.20 1015.54 5589.97 00:13:00.132 PCIE (0000:00:08.0) NSID 1 from core 0: 6063.71 23.69 2638.12 1019.69 5970.48 00:13:00.132 PCIE (0000:00:08.0) NSID 2 from core 0: 6063.71 23.69 2638.03 1041.43 6056.71 00:13:00.132 PCIE (0000:00:08.0) NSID 3 from core 0: 6063.71 23.69 2638.04 1043.42 6583.39 00:13:00.132 ======================================================== 00:13:00.132 Total : 36382.24 142.12 2637.91 1015.54 6583.39 00:13:00.132 00:13:01.559 Initializing NVMe Controllers 00:13:01.559 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:01.559 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:13:01.559 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:13:01.559 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:13:01.559 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:13:01.559 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:13:01.559 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:13:01.559 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:13:01.559 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:13:01.559 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:13:01.559 Initialization complete. Launching workers. 
00:13:01.559 ======================================================== 00:13:01.559 Latency(us) 00:13:01.559 Device Information : IOPS MiB/s Average min max 00:13:01.559 PCIE (0000:00:06.0) NSID 1 from core 2: 3838.75 15.00 4165.41 888.99 12504.92 00:13:01.559 PCIE (0000:00:07.0) NSID 1 from core 2: 3838.75 15.00 4167.30 907.61 12504.33 00:13:01.559 PCIE (0000:00:09.0) NSID 1 from core 2: 3838.75 15.00 4167.21 909.59 12522.83 00:13:01.559 PCIE (0000:00:08.0) NSID 1 from core 2: 3838.75 15.00 4167.35 904.68 12746.72 00:13:01.559 PCIE (0000:00:08.0) NSID 2 from core 2: 3838.75 15.00 4165.79 897.10 12456.55 00:13:01.559 PCIE (0000:00:08.0) NSID 3 from core 2: 3838.75 15.00 4163.81 778.13 12804.75 00:13:01.559 ======================================================== 00:13:01.559 Total : 23032.52 89.97 4166.14 778.13 12804.75 00:13:01.559 00:13:01.559 12:34:10 -- nvme/nvme.sh@65 -- # wait 66136 00:13:01.559 12:34:10 -- nvme/nvme.sh@66 -- # wait 66137 00:13:01.559 00:13:01.559 real 0m10.887s 00:13:01.559 user 0m19.111s 00:13:01.559 sys 0m0.917s 00:13:01.559 12:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.559 12:34:10 -- common/autotest_common.sh@10 -- # set +x 00:13:01.559 ************************************ 00:13:01.559 END TEST nvme_multi_secondary 00:13:01.559 ************************************ 00:13:01.559 12:34:10 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:01.559 12:34:10 -- nvme/nvme.sh@102 -- # kill_stub 00:13:01.559 12:34:10 -- common/autotest_common.sh@1065 -- # [[ -e /proc/65059 ]] 00:13:01.559 12:34:10 -- common/autotest_common.sh@1066 -- # kill 65059 00:13:01.559 12:34:10 -- common/autotest_common.sh@1067 -- # wait 65059 00:13:02.492 [2024-05-15 12:34:11.263476] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:02.492 [2024-05-15 12:34:11.263559] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:02.492 [2024-05-15 12:34:11.263585] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:02.492 [2024-05-15 12:34:11.263625] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:03.057 [2024-05-15 12:34:11.784391] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:03.057 [2024-05-15 12:34:11.784482] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:03.057 [2024-05-15 12:34:11.784536] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:03.057 [2024-05-15 12:34:11.784563] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:03.992 [2024-05-15 12:34:12.782192] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:03.992 [2024-05-15 12:34:12.782302] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. 
Dropping the request. 00:13:03.992 [2024-05-15 12:34:12.782327] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:03.992 [2024-05-15 12:34:12.782360] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:04.925 [2024-05-15 12:34:13.796476] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:04.926 [2024-05-15 12:34:13.796586] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:04.926 [2024-05-15 12:34:13.796613] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:04.926 [2024-05-15 12:34:13.796640] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66008) is not found. Dropping the request. 00:13:05.184 12:34:14 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:13:05.184 12:34:14 -- common/autotest_common.sh@1073 -- # echo 2 00:13:05.184 12:34:14 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:05.184 12:34:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:05.184 12:34:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.184 12:34:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.184 ************************************ 00:13:05.184 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:05.184 ************************************ 00:13:05.184 12:34:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:05.184 * Looking for test storage... 
00:13:05.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:05.184 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:05.184 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:05.184 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:05.184 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:05.184 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:05.184 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:05.184 12:34:14 -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:05.184 12:34:14 -- common/autotest_common.sh@1509 -- # local bdfs 00:13:05.184 12:34:14 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:05.184 12:34:14 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:05.184 12:34:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:05.184 12:34:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:05.184 12:34:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:05.184 12:34:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:05.184 12:34:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:05.442 12:34:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:05.442 12:34:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:13:05.442 12:34:14 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:13:05.442 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:13:05.442 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:13:05.442 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66323 00:13:05.442 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:05.442 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66323 00:13:05.442 12:34:14 -- common/autotest_common.sh@819 -- # '[' -z 66323 ']' 00:13:05.442 12:34:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.442 12:34:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:05.443 12:34:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.443 12:34:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:05.443 12:34:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.443 12:34:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:05.443 [2024-05-15 12:34:14.407186] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
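For orientation, the reset-stuck-admin-cmd flow traced below condenses to a handful of RPC calls against the spdk_tgt that was just launched. This is a minimal sketch using the rpc.py path and flags that appear verbatim in this trace; the placeholder base64 payload stands in for the encoded admin command and is not the test's literal value.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Attach the first discovered controller, then arm a one-shot error
# injection on the next admin Get Features (opc 10): hold it for up to
# 15 s and complete it with SCT=0 / SC=1 instead of submitting it.
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Send an admin command so it gets stuck, then reset the controller; the
# reset path must complete the stuck command manually and recover.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c '<base64 command>' \
    > /tmp/err_inj.txt &
sleep 2
$rpc bdev_nvme_reset_controller nvme0
wait                          # let the stuck send_cmd RPC finish
jq -r .cpl /tmp/err_inj.txt   # completion status, decoded for the SC/SCT check
$rpc bdev_nvme_detach_controller nvme0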
00:13:05.443 [2024-05-15 12:34:14.407356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66323 ] 00:13:05.716 [2024-05-15 12:34:14.603957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.021 [2024-05-15 12:34:14.889277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.021 [2024-05-15 12:34:14.889717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.021 [2024-05-15 12:34:14.889789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.021 [2024-05-15 12:34:14.889923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.021 [2024-05-15 12:34:14.889943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.393 12:34:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:07.393 12:34:16 -- common/autotest_common.sh@852 -- # return 0 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:13:07.393 12:34:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.393 12:34:16 -- common/autotest_common.sh@10 -- # set +x 00:13:07.393 nvme0n1 00:13:07.393 12:34:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_IY5g6.txt 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:07.393 12:34:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.393 12:34:16 -- common/autotest_common.sh@10 -- # set +x 00:13:07.393 true 00:13:07.393 12:34:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715776456 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66355 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:07.393 12:34:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:09.297 12:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.297 12:34:18 -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 [2024-05-15 12:34:18.184587] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:13:09.297 [2024-05-15 12:34:18.185034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:09.297 [2024-05-15 12:34:18.185079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:09.297 [2024-05-15 12:34:18.185100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.297 [2024-05-15 12:34:18.187022] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:09.297 12:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66355 00:13:09.297 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66355 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66355 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:09.297 12:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.297 12:34:18 -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 12:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_IY5g6.txt 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_IY5g6.txt 00:13:09.297 12:34:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66323 00:13:09.297 12:34:18 -- common/autotest_common.sh@926 -- # '[' -z 66323 ']' 00:13:09.297 12:34:18 -- common/autotest_common.sh@930 -- # kill -0 66323 00:13:09.297 12:34:18 -- common/autotest_common.sh@931 -- # uname 00:13:09.297 12:34:18 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:09.297 12:34:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66323 00:13:09.556 12:34:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:09.556 killing process with pid 66323 00:13:09.556 12:34:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:09.556 12:34:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66323' 00:13:09.556 12:34:18 -- common/autotest_common.sh@945 -- # kill 66323 00:13:09.556 12:34:18 -- common/autotest_common.sh@950 -- # wait 66323 00:13:12.092 12:34:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:12.092 12:34:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:12.092 00:13:12.092 real 0m6.429s 00:13:12.092 user 0m22.343s 00:13:12.092 sys 0m0.774s 00:13:12.092 12:34:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.092 12:34:20 -- common/autotest_common.sh@10 -- # set +x 00:13:12.092 ************************************ 00:13:12.092 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:12.092 ************************************ 00:13:12.092 12:34:20 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:12.092 12:34:20 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:12.092 12:34:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:12.092 12:34:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:12.093 12:34:20 -- common/autotest_common.sh@10 -- # set +x 00:13:12.093 ************************************ 00:13:12.093 START TEST nvme_fio 00:13:12.093 ************************************ 00:13:12.093 12:34:20 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:13:12.093 12:34:20 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:12.093 12:34:20 -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:12.093 12:34:20 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:12.093 12:34:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:12.093 12:34:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:12.093 12:34:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:12.093 12:34:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:12.093 12:34:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:12.093 12:34:20 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:12.093 12:34:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:13:12.093 12:34:20 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:13:12.093 12:34:20 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:12.093 12:34:20 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:12.093 12:34:20 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:12.093 12:34:20 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:12.093 12:34:20 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:12.093 12:34:20 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:12.351 12:34:21 -- nvme/nvme.sh@41 -- # bs=4096 00:13:12.351 12:34:21 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:12.351 12:34:21 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:12.351 12:34:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:12.351 12:34:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:12.351 12:34:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:12.351 12:34:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:12.351 12:34:21 -- common/autotest_common.sh@1320 -- # shift 00:13:12.351 12:34:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:12.351 12:34:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:12.351 12:34:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:12.351 12:34:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:12.351 12:34:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:12.351 12:34:21 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:12.351 12:34:21 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:12.351 12:34:21 -- common/autotest_common.sh@1326 -- # break 00:13:12.351 12:34:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:12.351 12:34:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:12.351 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:12.351 fio-3.35 00:13:12.351 Starting 1 thread 00:13:15.637 00:13:15.637 test: (groupid=0, jobs=1): err= 0: pid=66506: Wed May 15 12:34:24 2024 00:13:15.637 read: IOPS=17.3k, BW=67.7MiB/s (70.9MB/s)(135MiB/2001msec) 00:13:15.637 slat (nsec): min=4681, max=60023, avg=6212.93, stdev=1947.47 00:13:15.637 clat (usec): min=241, max=9617, avg=3669.92, stdev=705.25 00:13:15.637 lat (usec): min=247, max=9677, avg=3676.14, stdev=706.34 00:13:15.637 clat percentiles (usec): 00:13:15.637 | 1.00th=[ 2999], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3359], 00:13:15.637 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 3523], 00:13:15.637 | 70.00th=[ 3589], 80.00th=[ 3687], 90.00th=[ 4293], 95.00th=[ 5211], 00:13:15.637 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7111], 99.95th=[ 7898], 00:13:15.637 | 99.99th=[ 9372] 00:13:15.637 bw ( KiB/s): min=58267, max=74640, per=99.60%, avg=69009.00, stdev=9306.48, samples=3 00:13:15.637 iops : min=14566, max=18660, avg=17252.00, stdev=2327.05, samples=3 00:13:15.637 write: IOPS=17.3k, BW=67.7MiB/s (71.0MB/s)(135MiB/2001msec); 0 zone resets 00:13:15.637 slat (nsec): min=4786, max=53369, avg=6360.99, stdev=1997.06 00:13:15.637 clat (usec): min=337, max=9409, avg=3685.98, stdev=713.63 00:13:15.637 lat (usec): min=343, max=9422, avg=3692.34, stdev=714.70 00:13:15.637 clat percentiles (usec): 00:13:15.637 | 1.00th=[ 3032], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3359], 00:13:15.637 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:13:15.637 | 70.00th=[ 3589], 80.00th=[ 3687], 90.00th=[ 4293], 95.00th=[ 5211], 00:13:15.637 | 99.00th=[ 6783], 99.50th=[ 
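Each fio run in this test goes through the same preload dance the xtrace above just showed: when the SPDK fio plugin is built with ASAN, the sanitizer runtime must be preloaded ahead of the plugin or fio fails to load it. A condensed sketch of that pattern, using the paths from this log:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# Find the ASAN runtime the plugin links against (empty when not ASAN-built).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Preload the sanitizer first, then the plugin, and hand fio a filename
# that encodes the transport and PCI address (colons replaced with dots).
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096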
6915], 99.90th=[ 7111], 99.95th=[ 7963], 00:13:15.637 | 99.99th=[ 9241] 00:13:15.637 bw ( KiB/s): min=58578, max=74280, per=99.37%, avg=68904.67, stdev=8945.67, samples=3 00:13:15.637 iops : min=14644, max=18570, avg=17226.00, stdev=2236.71, samples=3 00:13:15.637 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:13:15.637 lat (msec) : 2=0.17%, 4=87.39%, 10=12.39% 00:13:15.637 cpu : usr=98.85%, sys=0.25%, ctx=4, majf=0, minf=607 00:13:15.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:15.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.637 issued rwts: total=34661,34687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.637 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.637 00:13:15.637 Run status group 0 (all jobs): 00:13:15.637 READ: bw=67.7MiB/s (70.9MB/s), 67.7MiB/s-67.7MiB/s (70.9MB/s-70.9MB/s), io=135MiB (142MB), run=2001-2001msec 00:13:15.637 WRITE: bw=67.7MiB/s (71.0MB/s), 67.7MiB/s-67.7MiB/s (71.0MB/s-71.0MB/s), io=135MiB (142MB), run=2001-2001msec 00:13:15.895 ----------------------------------------------------- 00:13:15.895 Suppressions used: 00:13:15.895 count bytes template 00:13:15.895 1 32 /usr/src/fio/parse.c 00:13:15.895 1 8 libtcmalloc_minimal.so 00:13:15.895 ----------------------------------------------------- 00:13:15.895 00:13:15.895 12:34:24 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:15.895 12:34:24 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:15.895 12:34:24 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:15.895 12:34:24 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:13:16.155 12:34:24 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:13:16.155 12:34:24 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:16.413 12:34:25 -- nvme/nvme.sh@41 -- # bs=4096 00:13:16.413 12:34:25 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:16.413 12:34:25 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:16.413 12:34:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:16.413 12:34:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.413 12:34:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:16.413 12:34:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:16.413 12:34:25 -- common/autotest_common.sh@1320 -- # shift 00:13:16.413 12:34:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:16.413 12:34:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.413 12:34:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:16.413 12:34:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:16.413 12:34:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:16.413 12:34:25 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:16.413 12:34:25 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:13:16.413 12:34:25 -- common/autotest_common.sh@1326 -- # break 00:13:16.413 12:34:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:16.413 12:34:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:16.671 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:16.671 fio-3.35 00:13:16.671 Starting 1 thread 00:13:19.953 00:13:19.953 test: (groupid=0, jobs=1): err= 0: pid=66571: Wed May 15 12:34:28 2024 00:13:19.953 read: IOPS=16.2k, BW=63.3MiB/s (66.3MB/s)(127MiB/2001msec) 00:13:19.953 slat (nsec): min=4642, max=65297, avg=6649.21, stdev=2227.32 00:13:19.953 clat (usec): min=233, max=9468, avg=3930.32, stdev=594.66 00:13:19.953 lat (usec): min=239, max=9519, avg=3936.97, stdev=595.51 00:13:19.953 clat percentiles (usec): 00:13:19.953 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3523], 00:13:19.953 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3851], 00:13:19.953 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4817], 00:13:19.953 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 7963], 00:13:19.953 | 99.99th=[ 9241] 00:13:19.953 bw ( KiB/s): min=59360, max=65680, per=97.37%, avg=63066.67, stdev=3298.81, samples=3 00:13:19.953 iops : min=14840, max=16420, avg=15766.67, stdev=824.70, samples=3 00:13:19.954 write: IOPS=16.2k, BW=63.4MiB/s (66.5MB/s)(127MiB/2001msec); 0 zone resets 00:13:19.954 slat (usec): min=4, max=112, avg= 6.83, stdev= 2.30 00:13:19.954 clat (usec): min=304, max=9333, avg=3933.24, stdev=593.87 00:13:19.954 lat (usec): min=310, max=9345, avg=3940.06, stdev=594.72 00:13:19.954 clat percentiles (usec): 00:13:19.954 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3523], 00:13:19.954 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3851], 00:13:19.954 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4817], 00:13:19.954 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[ 8094], 00:13:19.954 | 99.99th=[ 9110] 00:13:19.954 bw ( KiB/s): min=59696, max=65088, per=96.64%, avg=62733.33, stdev=2760.06, samples=3 00:13:19.954 iops : min=14924, max=16272, avg=15683.33, stdev=690.02, samples=3 00:13:19.954 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:13:19.954 lat (msec) : 2=0.06%, 4=64.60%, 10=35.30% 00:13:19.954 cpu : usr=99.05%, sys=0.00%, ctx=7, majf=0, minf=606 00:13:19.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:19.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:19.954 issued rwts: total=32401,32474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:19.954 00:13:19.954 Run status group 0 (all jobs): 00:13:19.954 READ: bw=63.3MiB/s (66.3MB/s), 63.3MiB/s-63.3MiB/s (66.3MB/s-66.3MB/s), io=127MiB (133MB), run=2001-2001msec 00:13:19.954 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=127MiB (133MB), run=2001-2001msec 00:13:19.954 ----------------------------------------------------- 00:13:19.954 Suppressions used: 00:13:19.954 count bytes template 00:13:19.954 1 32 /usr/src/fio/parse.c 00:13:19.954 1 8 libtcmalloc_minimal.so 00:13:19.954 
----------------------------------------------------- 00:13:19.954 00:13:19.954 12:34:28 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:19.954 12:34:28 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:19.954 12:34:28 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:13:19.954 12:34:28 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:20.213 12:34:29 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:13:20.213 12:34:29 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:20.472 12:34:29 -- nvme/nvme.sh@41 -- # bs=4096 00:13:20.472 12:34:29 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:20.472 12:34:29 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:20.472 12:34:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:20.472 12:34:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.472 12:34:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:20.472 12:34:29 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:20.472 12:34:29 -- common/autotest_common.sh@1320 -- # shift 00:13:20.472 12:34:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:20.472 12:34:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.472 12:34:29 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:20.472 12:34:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:20.472 12:34:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:20.472 12:34:29 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.472 12:34:29 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:20.472 12:34:29 -- common/autotest_common.sh@1326 -- # break 00:13:20.472 12:34:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:20.472 12:34:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:20.730 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:20.730 fio-3.35 00:13:20.730 Starting 1 thread 00:13:24.967 00:13:24.967 test: (groupid=0, jobs=1): err= 0: pid=66634: Wed May 15 12:34:33 2024 00:13:24.967 read: IOPS=17.6k, BW=68.7MiB/s (72.0MB/s)(137MiB/2001msec) 00:13:24.967 slat (nsec): min=4645, max=86923, avg=6299.22, stdev=2045.40 00:13:24.967 clat (usec): min=350, max=9232, avg=3618.90, stdev=444.68 00:13:24.967 lat (usec): min=356, max=9275, avg=3625.20, stdev=445.30 00:13:24.967 clat percentiles (usec): 00:13:24.967 | 1.00th=[ 3032], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3359], 00:13:24.967 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3556], 00:13:24.967 | 70.00th=[ 3621], 80.00th=[ 3785], 90.00th=[ 4228], 95.00th=[ 4359], 00:13:24.967 | 99.00th=[ 4817], 99.50th=[ 6259], 99.90th=[ 7701], 99.95th=[ 8094], 00:13:24.967 | 99.99th=[ 9110] 00:13:24.967 
bw ( KiB/s): min=66600, max=74296, per=100.00%, avg=71394.67, stdev=4182.78, samples=3 00:13:24.967 iops : min=16650, max=18574, avg=17848.67, stdev=1045.69, samples=3 00:13:24.967 write: IOPS=17.6k, BW=68.7MiB/s (72.1MB/s)(138MiB/2001msec); 0 zone resets 00:13:24.967 slat (nsec): min=4804, max=94741, avg=6503.91, stdev=2151.80 00:13:24.967 clat (usec): min=316, max=9169, avg=3629.29, stdev=452.34 00:13:24.967 lat (usec): min=324, max=9181, avg=3635.79, stdev=452.95 00:13:24.967 clat percentiles (usec): 00:13:24.967 | 1.00th=[ 3032], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3359], 00:13:24.967 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3556], 00:13:24.967 | 70.00th=[ 3621], 80.00th=[ 3818], 90.00th=[ 4228], 95.00th=[ 4359], 00:13:24.967 | 99.00th=[ 4817], 99.50th=[ 6325], 99.90th=[ 7898], 99.95th=[ 8225], 00:13:24.967 | 99.99th=[ 9110] 00:13:24.967 bw ( KiB/s): min=66840, max=74136, per=100.00%, avg=71306.67, stdev=3913.89, samples=3 00:13:24.967 iops : min=16710, max=18534, avg=17826.67, stdev=978.47, samples=3 00:13:24.967 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:24.967 lat (msec) : 2=0.05%, 4=82.79%, 10=17.13% 00:13:24.967 cpu : usr=99.10%, sys=0.05%, ctx=9, majf=0, minf=606 00:13:24.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:24.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:24.967 issued rwts: total=35183,35210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:24.967 00:13:24.967 Run status group 0 (all jobs): 00:13:24.967 READ: bw=68.7MiB/s (72.0MB/s), 68.7MiB/s-68.7MiB/s (72.0MB/s-72.0MB/s), io=137MiB (144MB), run=2001-2001msec 00:13:24.967 WRITE: bw=68.7MiB/s (72.1MB/s), 68.7MiB/s-68.7MiB/s (72.1MB/s-72.1MB/s), io=138MiB (144MB), run=2001-2001msec 00:13:24.967 ----------------------------------------------------- 00:13:24.967 Suppressions used: 00:13:24.967 count bytes template 00:13:24.967 1 32 /usr/src/fio/parse.c 00:13:24.967 1 8 libtcmalloc_minimal.so 00:13:24.967 ----------------------------------------------------- 00:13:24.967 00:13:24.967 12:34:33 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:24.967 12:34:33 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:24.967 12:34:33 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:13:24.967 12:34:33 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:24.967 12:34:33 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:13:24.967 12:34:33 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:24.967 12:34:33 -- nvme/nvme.sh@41 -- # bs=4096 00:13:24.967 12:34:33 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:24.967 12:34:33 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:24.967 12:34:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:24.967 12:34:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:24.967 12:34:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:24.967 12:34:33 -- 
common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:24.967 12:34:33 -- common/autotest_common.sh@1320 -- # shift 00:13:24.967 12:34:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:24.967 12:34:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:24.967 12:34:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:24.967 12:34:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:24.967 12:34:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:24.967 12:34:33 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:24.967 12:34:33 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:24.967 12:34:33 -- common/autotest_common.sh@1326 -- # break 00:13:24.967 12:34:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:24.967 12:34:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:25.224 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:25.224 fio-3.35 00:13:25.224 Starting 1 thread 00:13:29.469 00:13:29.469 test: (groupid=0, jobs=1): err= 0: pid=66700: Wed May 15 12:34:38 2024 00:13:29.469 read: IOPS=16.2k, BW=63.1MiB/s (66.2MB/s)(126MiB/2001msec) 00:13:29.469 slat (nsec): min=5188, max=63976, avg=6948.67, stdev=2052.22 00:13:29.469 clat (usec): min=248, max=9161, avg=3939.09, stdev=475.61 00:13:29.469 lat (usec): min=254, max=9168, avg=3946.04, stdev=476.26 00:13:29.469 clat percentiles (usec): 00:13:29.469 | 1.00th=[ 3195], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:13:29.469 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:13:29.469 | 70.00th=[ 3949], 80.00th=[ 4047], 90.00th=[ 4686], 95.00th=[ 4883], 00:13:29.469 | 99.00th=[ 5080], 99.50th=[ 5669], 99.90th=[ 8455], 99.95th=[ 8717], 00:13:29.469 | 99.99th=[ 8848] 00:13:29.469 bw ( KiB/s): min=55832, max=67176, per=98.01%, avg=63333.33, stdev=6497.00, samples=3 00:13:29.469 iops : min=13958, max=16794, avg=15833.33, stdev=1624.25, samples=3 00:13:29.469 write: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(126MiB/2001msec); 0 zone resets 00:13:29.469 slat (nsec): min=4966, max=49124, avg=6860.29, stdev=1885.45 00:13:29.469 clat (usec): min=274, max=9152, avg=3946.18, stdev=468.85 00:13:29.469 lat (usec): min=280, max=9158, avg=3953.04, stdev=469.43 00:13:29.469 clat percentiles (usec): 00:13:29.469 | 1.00th=[ 3228], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 00:13:29.469 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:13:29.469 | 70.00th=[ 3949], 80.00th=[ 4047], 90.00th=[ 4686], 95.00th=[ 4883], 00:13:29.469 | 99.00th=[ 5080], 99.50th=[ 5342], 99.90th=[ 8586], 99.95th=[ 8848], 00:13:29.469 | 99.99th=[ 8979] 00:13:29.469 bw ( KiB/s): min=56152, max=66552, per=97.40%, avg=63034.67, stdev=5961.05, samples=3 00:13:29.469 iops : min=14038, max=16638, avg=15758.67, stdev=1490.26, samples=3 00:13:29.469 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:29.469 lat (msec) : 2=0.05%, 4=76.43%, 10=23.48% 00:13:29.469 cpu : usr=98.95%, sys=0.10%, ctx=4, majf=0, minf=604 00:13:29.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:29.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.469 issued rwts: total=32326,32376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.469 00:13:29.469 Run status group 0 (all jobs): 00:13:29.469 READ: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=126MiB (132MB), run=2001-2001msec 00:13:29.469 WRITE: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=126MiB (133MB), run=2001-2001msec 00:13:29.729 ----------------------------------------------------- 00:13:29.729 Suppressions used: 00:13:29.729 count bytes template 00:13:29.729 1 32 /usr/src/fio/parse.c 00:13:29.729 1 8 libtcmalloc_minimal.so 00:13:29.729 ----------------------------------------------------- 00:13:29.729 00:13:29.729 12:34:38 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:29.729 12:34:38 -- nvme/nvme.sh@46 -- # true 00:13:29.729 00:13:29.729 real 0m18.001s 00:13:29.729 user 0m14.137s 00:13:29.729 sys 0m3.475s 00:13:29.729 12:34:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.729 ************************************ 00:13:29.729 END TEST nvme_fio 00:13:29.729 ************************************ 00:13:29.729 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:13:29.729 00:13:29.729 real 1m36.534s 00:13:29.729 user 3m48.610s 00:13:29.729 sys 0m16.764s 00:13:29.729 12:34:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.729 ************************************ 00:13:29.729 END TEST nvme 00:13:29.729 ************************************ 00:13:29.729 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:13:29.729 12:34:38 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:13:29.729 12:34:38 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:29.729 12:34:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:29.729 12:34:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:29.729 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:13:29.729 ************************************ 00:13:29.729 START TEST nvme_scc 00:13:29.729 ************************************ 00:13:29.729 12:34:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:29.987 * Looking for test storage... 
00:13:29.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:29.987 12:34:38 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:29.987 12:34:38 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:29.987 12:34:38 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:29.987 12:34:38 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:29.987 12:34:38 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.987 12:34:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.987 12:34:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.987 12:34:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.987 12:34:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.987 12:34:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.987 12:34:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.987 12:34:38 -- paths/export.sh@5 -- # export PATH 00:13:29.987 12:34:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.987 12:34:38 -- nvme/functions.sh@10 -- # ctrls=() 00:13:29.987 12:34:38 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:29.987 12:34:38 -- nvme/functions.sh@11 -- # nvmes=() 00:13:29.987 12:34:38 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:29.987 12:34:38 -- nvme/functions.sh@12 -- # bdfs=() 00:13:29.987 12:34:38 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:29.987 12:34:38 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:29.987 12:34:38 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:29.987 12:34:38 -- nvme/functions.sh@14 -- # nvme_name= 00:13:29.987 12:34:38 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.987 12:34:38 -- nvme/nvme_scc.sh@12 -- # uname 00:13:29.987 12:34:38 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
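The scan_nvme_ctrls pass that follows is verbose in the xtrace, but the underlying nvme_get pattern is small: id-ctrl output is a stream of "field : value" pairs that gets split on ':' and stored in a global associative array per controller. A simplified stand-in, not the helper itself (the traced functions.sh goes through eval and a shifted reference name):

declare -gA nvme0=()
while IFS=: read -r reg val; do
    reg=$(echo "$reg" | tr -d '[:space:]')        # field name, e.g. vid
    val=$(echo "$val" | sed 's/^[[:space:]]*//')  # value, e.g. 0x1b36
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "${nvme0[vid]}"   # 0x1b36 for the QEMU controller in this run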
00:13:29.987 12:34:38 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:29.987 12:34:38 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:30.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:30.554 Waiting for block devices as requested 00:13:30.554 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:30.554 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:30.813 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:30.813 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:36.089 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:36.089 12:34:44 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:36.089 12:34:44 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:36.089 12:34:44 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:36.089 12:34:44 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:36.089 12:34:44 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:13:36.089 12:34:44 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:13:36.089 12:34:44 -- scripts/common.sh@15 -- # local i 00:13:36.089 12:34:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:13:36.089 12:34:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:36.089 12:34:44 -- scripts/common.sh@24 -- # return 0 00:13:36.089 12:34:44 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:36.089 12:34:44 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:36.089 12:34:44 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:36.089 12:34:44 -- nvme/functions.sh@18 -- # shift 00:13:36.089 12:34:44 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.089 12:34:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.089 12:34:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.089 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:36.089 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:36.089 12:34:44 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.089 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:36.089 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:36.089 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.089 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.089 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.090 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.090 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.090 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 
12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.091 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.091 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:36.091 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:36.092 12:34:44 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:36.092 12:34:44 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:36.092 12:34:44 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:13:36.092 12:34:44 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:36.092 12:34:44 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:36.092 12:34:44 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:13:36.092 12:34:44 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:13:36.092 12:34:44 -- scripts/common.sh@15 -- # local i 00:13:36.092 12:34:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:13:36.092 12:34:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:36.092 12:34:44 -- scripts/common.sh@24 -- # return 0 00:13:36.092 12:34:44 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:36.092 12:34:44 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:36.092 12:34:44 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@18 -- # shift 00:13:36.092 12:34:44 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.092 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:36.092 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:36.092 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 
-- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 
12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.093 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:36.093 12:34:44 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:36.093 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- 
# IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:36.094 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.094 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.094 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:36.095 12:34:44 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:36.095 12:34:44 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:36.095 12:34:44 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:36.095 12:34:44 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@18 -- # shift 00:13:36.095 12:34:44 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 
12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:36.095 12:34:44 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 
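Stepping back from the field-by-field output: the surrounding discovery logic is visible at nvme/functions.sh@47-52 (controller loop and pci_can_use gate), @54-57 (the per-namespace loop that produced this nvme1n1 id-ns dump), and @60-63 (the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping). An outline reconstructed from those trace lines; the readlink-based BDF derivation is an assumption, since the excerpt only shows the resulting pci= value, and pci_can_use (scripts/common.sh) is left commented out:

    # Reconstructed outline of the discovery loops traced above.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:08.0 (assumed derivation)
        # pci_can_use "$pci" || continue                 # allow/block-list gate in scripts/common.sh
        ctrl_dev=${ctrl##*/}                             # e.g. nvme1 (functions.sh@51)
        # nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # the id-ctrl dump seen above
        for ns in "$ctrl/${ctrl##*/}n"*; do              # functions.sh@54
            [[ -e $ns ]] || continue                     # functions.sh@55
            : # nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"  # the id-ns dump here
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                     # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # functions.sh@61, name of the ns table
        bdfs["$ctrl_dev"]=$pci                           # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # functions.sh@63, indexed by number
    done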
12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.095 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.095 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:36.095 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:36.096 
12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:36.096 12:34:44 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:36.096 12:34:44 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:13:36.096 12:34:44 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:13:36.096 12:34:44 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@18 -- # shift 00:13:36.096 12:34:44 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.096 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:13:36.096 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.096 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
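[The records around this point all trace one mechanism: nvme_get() in nvme/functions.sh pipes `/usr/local/src/nvme-cli/nvme id-ns <dev>` into a read loop (functions.sh@16, @21), splits each "field : value" line on ':' into reg/val, skips empty values (@22 — the `[[ -n '' ]]` record is the banner line of the id-ns output), and evals the rest into a global associative array named after the device node (@23). A minimal stand-alone sketch of that pattern, not the verbatim functions.sh source; nvme_capture is a made-up name:

  nvme_capture() {                 # nvme_capture <array> <id-ctrl|id-ns> <dev>
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref=()"          # global associative array, e.g. nvme1n2
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}     # 'lbaf  0 ' -> 'lbaf0'
      val=${val# }                 # drop the separator space, keep any padding
      [[ -n $val ]] && eval "${ref}[$reg]=\"$val\""
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
  }

After a run such as `nvme_capture nvme1n2 id-ns /dev/nvme1n2`, fields become plain lookups: ${nvme1n2[nsze]} is 0x100000 and ${nvme1n2[lbaf4]} is 'ms:0 lbads:12 rp:0 (in use)' — exactly the assignments eval'd in the trace.]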
00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 
12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:36.097 12:34:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:13:36.097 12:34:44 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:36.097 12:34:44 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:13:36.097 12:34:44 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:13:36.097 12:34:44 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
id-ns /dev/nvme1n3 00:13:36.097 12:34:44 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@18 -- # shift 00:13:36.097 12:34:44 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.097 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.097 12:34:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:44 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:13:36.098 12:34:44 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.098 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:13:36.098 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.098 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:13:36.099 12:34:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:36.099 12:34:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:36.099 12:34:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:13:36.099 12:34:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:36.099 12:34:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:36.099 12:34:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:13:36.099 12:34:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:13:36.099 12:34:45 -- scripts/common.sh@15 -- # local i 00:13:36.099 12:34:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:13:36.099 12:34:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:36.099 12:34:45 -- scripts/common.sh@24 -- # return 0 00:13:36.099 12:34:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:36.099 12:34:45 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:36.099 12:34:45 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@18 -- # shift 00:13:36.099 12:34:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:36.099 12:34:45 -- 
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 
12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.099 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.099 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:36.099 12:34:45 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.100 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:36.100 12:34:45 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:36.100 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:36.101 
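[Shortly before this point the per-namespace pass for nvme1 finished: functions.sh@58 linked nvme1n1..nvme1n3 into _ctrl_ns, @60–63 registered the controller in ctrls/nvmes/bdfs (BDF 0000:00:08.0) and ordered_ctrls, and the @47 device loop moved on to /sys/class/nvme/nvme2. Its PCI address 0000:00:06.0 passes pci_can_use() in scripts/common.sh — the empty left-hand side in the `[[ =~ 0000:00:06.0 ]]` record shows no allow/block list is set, so `return 0` lets every device through — and the same capture now runs over `nvme id-ctrl`. A condensed sketch of that outer loop, reusing the hypothetical nvme_capture above; the readlink-based BDF lookup and the simplified block-list test are assumptions, not the verbatim scripts/common.sh logic:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    dev=${ctrl##*/}                                  # nvme0, nvme1, ...
    pci=$(basename "$(readlink -f "$ctrl/device")")  # BDF, e.g. 0000:00:06.0
    [[ -n $PCI_BLOCKED && " $PCI_BLOCKED " == *" $pci "* ]] && continue
    nvme_capture "$dev" id-ctrl "/dev/$dev"          # controller registers
    for ns in "$ctrl/$dev"n*; do                     # nvme1n1, nvme1n2, ...
      [[ -e $ns ]] && nvme_capture "${ns##*/}" id-ns "/dev/${ns##*/}"
    done
    ctrls[$dev]=$dev
    nvmes[$dev]=${dev}_ns                            # this controller's ns map
    bdfs[$dev]=$pci
    ordered_ctrls[${dev/nvme/}]=$dev                 # index by controller number
  done]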
12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 
'nvme2[oncs]="0x15d"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.101 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.101 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:36.101 12:34:45 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:36.364 12:34:45 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:36.364 12:34:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:36.364 12:34:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:36.364 12:34:45 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:36.364 12:34:45 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@18 -- # shift 00:13:36.364 12:34:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
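With the nvme2 controller array filled, the trace switches to that controller's namespaces (@53-@58): a nameref onto nvme2_ns, a glob over /sys/class/nvme/nvme2/nvme2n1*, and a second nvme_get pass with id-ns. A sketch of that walk; scan_namespaces is a hypothetical wrapper name (the traced script runs this body inline in its scan loop), and the caller is assumed to have declared the per-controller array, e.g. declare -A nvme2_ns:

    # Hypothetical wrapper around the per-controller walk traced at @53-@58.
    scan_namespaces() {
        local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev   # /sys/class/nvme/nvme2 -> nvme2
        local -n _ctrl_ns=${ctrl_dev}_ns            # @53: nameref onto nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do         # @54: matches nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue                # @55
            ns_dev=${ns##*/}                        # @56: nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # @57: fills nvme2n1[...]
            _ctrl_ns[${ns_dev##*n}]=$ns_dev         # @58: _ctrl_ns[1]=nvme2n1
        done
    }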
00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.364 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.364 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:36.364 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:36.365 12:34:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:36.365 12:34:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:36.365 12:34:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:13:36.365 12:34:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:36.365 12:34:45 -- nvme/functions.sh@47 -- # for ctrl 
in /sys/class/nvme/nvme* 00:13:36.365 12:34:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:13:36.365 12:34:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:13:36.365 12:34:45 -- scripts/common.sh@15 -- # local i 00:13:36.365 12:34:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:13:36.365 12:34:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:36.365 12:34:45 -- scripts/common.sh@24 -- # return 0 00:13:36.365 12:34:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:36.365 12:34:45 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:36.365 12:34:45 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@18 -- # shift 00:13:36.365 12:34:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:36.365 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:36.365 12:34:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:36.365 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.365 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:36.366 12:34:45 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:36.366 12:34:45 
-- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.366 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.366 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:36.366 12:34:45 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:36.367 12:34:45 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
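The nvme3 pass now underway began with the outer discovery steps traced at @47-@52 (glob over /sys/class/nvme/nvme*, PCI address 0000:00:07.0, the pci_can_use gate from scripts/common.sh) and, as the nvme2 pass showed at @60-@63, ends by registering the controller in the ctrls, nvmes, bdfs, and ordered_ctrls arrays. A sketch of that outer scan; the readlink-based BDF lookup is an assumption, since the log only shows its result:

    # Reconstructed shape of the outer scan (@47-@52 discovery, @60-@63
    # registration). The readlink lookup is an assumed way to obtain the BDF
    # the trace reports as pci=0000:00:07.0; pci_can_use honors the allow and
    # skip lists checked at scripts/common.sh@15-@24.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do                # @47
        [[ -e $ctrl ]] || continue                       # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")  # @49
        pci_can_use "$pci" || continue                   # @50
        ctrl_dev=${ctrl##*/}                             # @51: nvme3
        declare -gA "${ctrl_dev}_ns=()"
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52
        scan_namespaces "$ctrl"                          # @53-@58, sketched above
        ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
        bdfs["$ctrl_dev"]=$pci                           # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63: index-ordered
    done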
00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.367 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:36.367 12:34:45 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:36.367 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:36.368 12:34:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:36.368 12:34:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:13:36.368 12:34:45 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:13:36.368 12:34:45 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@18 -- # shift 00:13:36.368 12:34:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
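The id-ns pass that starts here fills nvme3n1 with nsze/ncap/nuse plus the lbaf0..lbaf7 format strings seen earlier for nvme2n1, and flbas then selects the format actually in use (its low four bits index the lbafN entries, and lbads is log2 of the data block size). A small consumer example; lbaf_block_size is hypothetical and not part of functions.sh:

    # Hypothetical consumer of the arrays built above: FLBAS bits 3:0 pick the
    # active LBA format, and 2^lbads gives the data block size in bytes.
    lbaf_block_size() {
        local -n _ns=$1
        local idx=$(( ${_ns[flbas]} & 0xf ))   # nvme3n1: flbas=0x4 -> lbaf4
        local lbaf=${_ns[lbaf$idx]}            # "ms:0 lbads:12 rp:0 "
        local lbads=${lbaf##*lbads:}
        echo $(( 1 << ${lbads%% *} ))          # 2^12 = 4096
    }
    # lbaf_block_size nvme3n1  -> 4096
    # lbaf_block_size nvme2n1  -> 4096 (flbas=0x7: "ms:64 lbads:12 rp:0 (in use)")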
00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.368 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.368 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.368 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:13:36.369 12:34:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[nvmsetid]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:36.369 12:34:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:36.369 12:34:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:36.369 12:34:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:13:36.369 12:34:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:36.369 12:34:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:36.369 12:34:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:13:36.369 12:34:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:36.369 12:34:45 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:36.369 12:34:45 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:36.369 12:34:45 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:13:36.369 12:34:45 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:36.369 12:34:45 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:13:36.369 12:34:45 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:36.369 12:34:45 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:13:36.369 12:34:45 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:13:36.369 12:34:45 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:36.369 12:34:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:13:36.369 12:34:45 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:13:36.369 12:34:45 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:13:36.369 12:34:45 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:13:36.369 12:34:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:36.369 12:34:45 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:36.369 12:34:45 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:36.369 12:34:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:36.369 12:34:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:36.370 12:34:45 -- nvme/functions.sh@197 -- # echo nvme1 00:13:36.370 12:34:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:36.370 12:34:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:13:36.370 12:34:45 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:13:36.370 12:34:45 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:13:36.370 12:34:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:36.370 12:34:45 -- 
nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:36.370 12:34:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:36.370 12:34:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:36.370 12:34:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:36.370 12:34:45 -- nvme/functions.sh@197 -- # echo nvme0 00:13:36.370 12:34:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:36.370 12:34:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:13:36.370 12:34:45 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:13:36.370 12:34:45 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:13:36.370 12:34:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:36.370 12:34:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:36.370 12:34:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:36.370 12:34:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:36.370 12:34:45 -- nvme/functions.sh@197 -- # echo nvme3 00:13:36.370 12:34:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:36.370 12:34:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:13:36.370 12:34:45 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:13:36.370 12:34:45 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:13:36.370 12:34:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:36.370 12:34:45 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:36.370 12:34:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:36.370 12:34:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:36.370 12:34:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:36.370 12:34:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:36.370 12:34:45 -- nvme/functions.sh@197 -- # echo nvme2 00:13:36.370 12:34:45 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:13:36.370 12:34:45 -- nvme/functions.sh@206 -- # echo nvme1 00:13:36.370 12:34:45 -- nvme/functions.sh@207 -- # return 0 00:13:36.370 12:34:45 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:36.370 12:34:45 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:13:36.370 12:34:45 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:37.305 lsblk: /dev/nvme0c0n1: not a block device 00:13:37.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:37.564 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:37.564 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:37.564 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:37.822 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:37.822 12:34:46 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:13:37.822 12:34:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:37.822 12:34:46 -- 
common/autotest_common.sh@1083 -- # xtrace_disable
00:13:37.822 12:34:46 -- common/autotest_common.sh@10 -- # set +x
00:13:37.822 ************************************
00:13:37.822 START TEST nvme_simple_copy ************************************
00:13:37.822 12:34:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:13:38.081 Initializing NVMe Controllers
00:13:38.081 Attaching to 0000:00:08.0
00:13:38.081 Controller supports SCC. Attached to 0000:00:08.0
00:13:38.081 Namespace ID: 1 size: 4GB
00:13:38.081 Initialization complete.
00:13:38.081
00:13:38.081 Controller QEMU NVMe Ctrl (12342 )
00:13:38.081 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:13:38.081 Namespace Block Size:4096
00:13:38.081 Writing LBAs 0 to 63 with Random Data
00:13:38.081 Copied LBAs from 0 - 63 to the Destination LBA 256
00:13:38.081 LBAs matching Written Data: 64
00:13:38.081
00:13:38.081 real 0m0.315s
00:13:38.081 user 0m0.124s
00:13:38.081 sys 0m0.089s
00:13:38.081 ************************************
00:13:38.081 END TEST nvme_simple_copy ************************************
00:13:38.081 12:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:38.081 12:34:47 -- common/autotest_common.sh@10 -- # set +x
00:13:38.081 ************************************
00:13:38.081 END TEST nvme_scc ************************************
00:13:38.081
00:13:38.081 real 0m8.370s
00:13:38.081 user 0m1.444s
00:13:38.081 sys 0m1.847s
00:13:38.081 12:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:38.081 12:34:47 -- common/autotest_common.sh@10 -- # set +x
00:13:38.340 12:34:47 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]]
00:13:38.340 12:34:47 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]]
00:13:38.340 12:34:47 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]]
00:13:38.340 12:34:47 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]]
00:13:38.340 12:34:47 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:13:38.340 12:34:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:13:38.340 12:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:38.340 12:34:47 -- common/autotest_common.sh@10 -- # set +x
00:13:38.340 ************************************
00:13:38.340 START TEST nvme_fdp ************************************
00:13:38.340 12:34:47 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh
00:13:38.340 * Looking for test storage...
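The controller pick above comes down to a single bit: ctrl_has_scc reads each controller's ONCS register (0x15d on all four QEMU devices) and masks it with bit 8, the Copy-command capability that the simple-copy test just exercised. A minimal standalone sketch of that check; the literal value is taken from the trace above:

# ONCS bit 8 (0x100) advertises the NVMe Copy command, i.e. SCC support.
oncs=0x15d                     # oncs value echoed for nvme0..nvme3 above
if (( oncs & 1 << 8 )); then   # 0x15d & 0x100 == 0x100 -> non-zero -> true
    echo "controller supports simple copy"
fi

Because get_ctrls_with_feature walks bash's unordered associative array of controllers, nvme1 happens to be echoed first and becomes the test target on 0000:00:08.0.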
00:13:38.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:38.340 12:34:47 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:38.340 12:34:47 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:38.340 12:34:47 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:38.340 12:34:47 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:38.340 12:34:47 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.340 12:34:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.340 12:34:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.340 12:34:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.340 12:34:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.340 12:34:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.340 12:34:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.340 12:34:47 -- paths/export.sh@5 -- # export PATH 00:13:38.340 12:34:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.340 12:34:47 -- nvme/functions.sh@10 -- # ctrls=() 00:13:38.340 12:34:47 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:38.340 12:34:47 -- nvme/functions.sh@11 -- # nvmes=() 00:13:38.340 12:34:47 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:38.341 12:34:47 -- nvme/functions.sh@12 -- # bdfs=() 00:13:38.341 12:34:47 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:38.341 12:34:47 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:38.341 12:34:47 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:38.341 12:34:47 -- nvme/functions.sh@14 -- # nvme_name= 00:13:38.341 12:34:47 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:38.341 12:34:47 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:38.908 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:38.908 Waiting for block devices as requested 00:13:38.908 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:38.908 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.167 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.167 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:44.449 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:44.449 12:34:53 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:44.449 12:34:53 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:44.449 12:34:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:44.449 12:34:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:13:44.449 12:34:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:13:44.449 12:34:53 -- scripts/common.sh@15 -- # local i 00:13:44.449 12:34:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:13:44.449 12:34:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:44.449 12:34:53 -- scripts/common.sh@24 -- # return 0 00:13:44.449 12:34:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:44.449 12:34:53 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:44.449 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.449 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.449 
12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.449 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:44.449 12:34:53 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.449 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:44.450 12:34:53 -- nvme/functions.sh@21 
-- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
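The @21-@23 lines marching past here are functions.sh's nvme_get loop: run nvme-cli's id-ctrl, split each output line on ':' into a register name and a value, and eval the pair into a global associative array (nvme0, nvme1, ...). A condensed sketch of the same pattern, with the array name and the plain `nvme` invocation as illustrative stand-ins for the script's nameref and pinned binary:

# Parse `nvme id-ctrl` output ("oncs      : 0x15d", ...) into an associative array.
declare -A regs
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}               # "ps    0" collapses to "ps0", as in the trace
    val=${val#"${val%%[![:space:]]*}"}     # left-trim the value; lines with no value are skipped
    [[ -n $reg && -n $val ]] && regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
echo "${regs[oncs]}"                       # 0x15d for the QEMU controllers here

Keeping the remainder of each line intact in val is what lets composite fields such as ps0 ("mp:25.00W operational ...") survive the colon split.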
00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.450 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:44.450 12:34:53 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.450 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:44.451 
12:34:53 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.451 12:34:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:44.451 12:34:53 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.451 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:44.452 12:34:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:44.452 12:34:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:44.452 12:34:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:13:44.452 12:34:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:44.452 12:34:53 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:13:44.452 12:34:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:13:44.452 12:34:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:13:44.452 12:34:53 -- scripts/common.sh@15 -- # local i 00:13:44.452 12:34:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:13:44.452 12:34:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:44.452 12:34:53 -- scripts/common.sh@24 -- # return 0 00:13:44.452 12:34:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:44.452 12:34:53 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:44.452 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.452 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:44.452 12:34:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:44.452 12:34:53 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.452 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:44.452 12:34:53 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.452 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:44.453 12:34:53 
-- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:44.453 12:34:53 -- 
nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
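What this stretch of trace is doing: nvme-cli prints its id-ctrl report as `reg : val` lines, and the script splits each line on `:` (the repeated `IFS=:` / `read -r reg val` pairs) and evals the pair into a global associative array named after the controller, so every field becomes addressable later as e.g. ${nvme1[mdts]}. A minimal sketch of that pattern, not the real nvme/functions.sh (whose quoting and shifting are richer); `nvme_get_sketch` is a made-up name:

    # Capture "key : value" output from nvme-cli into an associative array.
    # Simplified relative to the traced script; assumes plain scalar values.
    nvme_get_sketch() {
        local ref=$1 reg val
        declare -gA "$ref=()"                 # e.g. nvme1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # 'mdts      ' -> 'mdts'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"    # nvme1[mdts]='7'
        done < <(nvme id-ctrl "/dev/$ref")
    }

The entries that follow store sqes=0x66 and then cqes=0x44; the low and high nibbles are the required and maximum entry sizes as powers of two, i.e. 64-byte submission and 16-byte completion queue entries.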
00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:44.453 12:34:53 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.453 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.453 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # 
nvme1[icdoff]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.454 12:34:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:44.454 12:34:53 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.454 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:44.455 12:34:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:44.455 12:34:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:44.455 12:34:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:44.455 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.455 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 
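The loop has now moved from the nvme1 controller to its namespaces: the @54-@58 frames glob the controller's sysfs children and rerun the same capture per namespace with `nvme id-ns`, then index each array back into an `nvme1_ns` map by namespace number. Roughly, under the same assumptions as the sketch above (`nvme_get_sketch_ns` is an assumed id-ns variant of it):

    # Enumerate /sys/class/nvme/nvme1/nvme1n* and capture id-ns per namespace.
    declare -A nvme1_ns=()
    for ns in /sys/class/nvme/nvme1/nvme1n*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                  # nvme1n1, nvme1n2, ...
        nvme_get_sketch_ns "$ns_dev"      # assumed helper wrapping 'nvme id-ns'
        nvme1_ns[${ns_dev##*n}]=$ns_dev   # nvme1_ns[2]=nvme1n2
    done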
00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:44.455 12:34:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.455 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.455 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.455 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:44.456 12:34:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:44.456 12:34:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:13:44.456 12:34:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:13:44.456 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.456 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:13:44.456 
12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.456 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:13:44.456 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:13:44.456 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 
'nvme1n2[nacwu]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:44.457 12:34:53 
-- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:44.457 12:34:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:13:44.457 12:34:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:44.457 12:34:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:13:44.457 12:34:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:13:44.457 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@18 -- # shift 
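
The block of trace above is nvme/functions.sh replaying its nvme_get helper for each namespace: it shells out to nvme-cli, splits every "field : value" line at the first colon, and evals the pair into a global associative array (nvme1n2, nvme1n3, ...) keyed by field name. A minimal sketch of that pattern, simplified from functions.sh and using the plain `nvme` binary in place of the job's pinned /usr/local/src/nvme-cli/nvme build:

    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "$ref=()"                      # e.g. declare -gA nvme1n3
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "lbaf  4" -> "lbaf4"
            val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace only
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <(nvme "$subcmd" "$dev")
    }
    # usage: nvme_get_sketch nvme1n3 id-ns /dev/nvme1n3; echo "${nvme1n3[nsze]}"

Because the split happens only at the first colon, values keep their embedded colons, which is why the lbafN entries land in the array as whole strings like 'ms:0 lbads:9 rp:0 (in use)'.
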
00:13:44.457 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.457 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.457 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 
12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:13:44.458 12:34:53 
-- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.458 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:13:44.458 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.458 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:13:44.459 12:34:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:44.459 12:34:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:44.459 12:34:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:13:44.459 12:34:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:44.459 12:34:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:44.459 12:34:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:13:44.459 12:34:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:13:44.459 12:34:53 -- scripts/common.sh@15 -- # local i 00:13:44.459 12:34:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:13:44.459 12:34:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:44.459 12:34:53 -- scripts/common.sh@24 -- # return 0 00:13:44.459 12:34:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:44.459 12:34:53 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:44.459 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.459 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.459 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:44.459 12:34:53 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.459 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 
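
Before each id-ctrl parse, the trace runs the candidate controller's PCI address through pci_can_use in scripts/common.sh (here 0000:00:06.0, which passes and returns 0 because both filter lists are empty). A hedged sketch of that gate; the list variable names PCI_ALLOWED and PCI_BLOCKED are assumptions for illustration, not taken from the script:

    pci_can_use_sketch() {
        local pci=$1
        # an allow list is set, but this address is not on it -> unusable
        if [[ -n ${PCI_ALLOWED:-} ]] && [[ ! $PCI_ALLOWED =~ $pci ]]; then
            return 1
        fi
        # the address is on the block list -> unusable
        if [[ ${PCI_BLOCKED:-} =~ $pci ]]; then
            return 1
        fi
        return 0
    }
    # e.g.: pci_can_use_sketch 0000:00:06.0 && ctrl_dev=nvme2

The `[[ =~ 0000:00:06.0 ]]` and `[[ -z '' ]]` tests visible in the trace are this kind of membership check with the list variables expanded empty.
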
00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:44.460 
12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:44.460 
12:34:53 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.460 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:44.460 12:34:53 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.460 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 
12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.461 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:44.461 12:34:53 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.461 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 
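
The id-ctrl fields captured into nvme2[] above are mostly bitmasks that can be tested with shell arithmetic once the array is filled; oacs=0x12a and oncs=0x15d are the optional-admin-command and optional-NVM-command support masks. The bit positions below come from the NVMe base specification, not from functions.sh, so treat the decode as illustrative:

    # falls back to the value captured in this run if the array is absent
    oncs=${nvme2[oncs]:-0x15d}
    (( oncs & 1 << 0 )) && echo "Compare supported"
    (( oncs & 1 << 2 )) && echo "Dataset Management supported"
    (( oncs & 1 << 3 )) && echo "Write Zeroes supported"

For the 0x15d captured here all three tests succeed, matching what the QEMU NVMe model advertises in this run.
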
00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:44.462 12:34:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:44.462 12:34:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:44.462 12:34:53 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:44.462 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.462 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.462 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:44.462 12:34:53 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.462 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.462 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:44.463 12:34:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:44.463 12:34:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:44.463 12:34:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:13:44.463 12:34:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:44.463 12:34:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:44.463 12:34:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:13:44.463 12:34:53 -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:13:44.463 12:34:53 -- scripts/common.sh@15 -- # local i 00:13:44.463 12:34:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:13:44.463 12:34:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:44.463 12:34:53 -- scripts/common.sh@24 -- # return 0 00:13:44.463 12:34:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:44.463 12:34:53 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:44.463 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.463 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.463 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:44.463 12:34:53 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.463 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- 
nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:44.464 
12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:44.464 12:34:53 -- 
nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- 
nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.464 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:44.464 12:34:53 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:44.464 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 
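As an aside, the sqes value just captured (0x66) is a packed power-of-two field from Identify Controller: bits 3:0 encode the required (minimum) submission queue entry size and bits 7:4 the maximum, each as 2^n bytes; the cqes field read next is packed the same way for completion queue entries. A minimal bash decode, purely illustrative and not part of functions.sh:

  sqes=0x66
  printf 'SQE min=%d max=%d bytes\n' \
      $(( 1 << (sqes & 0xf) )) $(( 1 << ((sqes >> 4) & 0xf) ))   # 64 / 64
  cqes=0x44
  printf 'CQE min=%d max=%d bytes\n' \
      $(( 1 << (cqes & 0xf) )) $(( 1 << ((cqes >> 4) & 0xf) ))   # 16 / 16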
00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # 
nvme3[nwpc]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.465 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.465 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.465 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:44.725 12:34:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:44.725 12:34:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:13:44.725 12:34:53 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:13:44.725 12:34:53 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@18 -- # shift 00:13:44.725 12:34:53 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 
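The wall of IFS=: / read / eval records above and below is functions.sh's nvme_get helper at work: it pipes nvme-cli's id-ctrl or id-ns output through a read loop that splits each "field : value" line on the colon and stores the pair in a global associative array (nvme3, nvme3n1, ...), so later helpers can look registers up by name. A condensed sketch of that pattern, assuming nvme-cli's plain-text output format; the variable names are illustrative, not the exact functions.sh code:

  declare -gA ns=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # field name, e.g. nsze
      [[ -n $reg && -n $val ]] || continue
      ns[$reg]=${val# }               # keep the raw value, e.g. 0x140000
  done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1)
  echo "${ns[nsze]}"                  # -> 0x140000 on this VM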
00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:13:44.725 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.725 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.725 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:44.726 12:34:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:44.726 12:34:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.726 12:34:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:13:44.726 12:34:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:44.726 12:34:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:44.726 12:34:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:13:44.726 12:34:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:44.726 12:34:53 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:44.726 12:34:53 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:44.726 12:34:53 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:44.726 12:34:53 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:44.726 12:34:53 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:44.726 12:34:53 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:44.726 12:34:53 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:44.726 12:34:53 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:44.727 12:34:53 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:44.727 12:34:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:44.727 12:34:53 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:44.727 12:34:53 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:44.727 12:34:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:44.727 12:34:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:44.727 12:34:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:44.727 12:34:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:44.727 12:34:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:44.727 12:34:53 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:44.727 12:34:53 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:44.727 12:34:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:44.727 12:34:53 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@76 -- # echo 0x88010 
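Each loop iteration above fetches the controller's cached ctratt and tests bit 19, the Flexible Data Placement capability bit of the CTRATT field from Identify Controller: 0x8000 fails the test, while nvme0's 0x88010 (which includes 0x80000) passes, so nvme0 is the controller echoed for the FDP suite. The same test in isolation, as a hedged one-liner rather than the ctrl_has_fdp source:

  ctratt=0x88010
  (( ctratt & 1 << 19 )) && echo "FDP supported"   # 1<<19 = 0x80000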
00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:44.727 12:34:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:44.727 12:34:53 -- nvme/functions.sh@197 -- # echo nvme0 00:13:44.727 12:34:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:44.727 12:34:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:44.727 12:34:53 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:44.727 12:34:53 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:44.727 12:34:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:44.727 12:34:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:44.727 12:34:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:44.727 12:34:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:44.727 12:34:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:44.727 12:34:53 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:44.727 12:34:53 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:44.727 12:34:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:44.727 12:34:53 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:44.727 12:34:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:44.727 12:34:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:44.727 12:34:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:44.727 12:34:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:44.727 12:34:53 -- nvme/functions.sh@204 -- # trap - ERR 00:13:44.727 12:34:53 -- nvme/functions.sh@204 -- # print_backtrace 00:13:44.727 12:34:53 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:44.727 12:34:53 -- common/autotest_common.sh@1132 -- # return 0 00:13:44.727 12:34:53 -- nvme/functions.sh@204 -- # trap - ERR 00:13:44.727 12:34:53 -- nvme/functions.sh@204 -- # print_backtrace 00:13:44.727 12:34:53 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:44.727 12:34:53 -- common/autotest_common.sh@1132 -- # return 0 00:13:44.727 12:34:53 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:44.727 12:34:53 -- nvme/functions.sh@206 -- # echo nvme0 00:13:44.727 12:34:53 -- nvme/functions.sh@207 -- # return 0 00:13:44.727 12:34:53 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:13:44.727 12:34:53 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:13:44.727 12:34:53 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:45.658 lsblk: /dev/nvme0c0n1: not a block device 00:13:45.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:45.917 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.917 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.917 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.917 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.917 12:34:54 -- nvme/nvme_fdp.sh@17 -- # run_test 
nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:45.917 12:34:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:45.917 12:34:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.917 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:13:45.917 ************************************ 00:13:45.917 START TEST nvme_flexible_data_placement 00:13:45.917 ************************************ 00:13:45.917 12:34:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:46.174 Initializing NVMe Controllers 00:13:46.174 Attaching to 0000:00:09.0 00:13:46.174 Controller supports FDP Attached to 0000:00:09.0 00:13:46.174 Namespace ID: 1 Endurance Group ID: 1 00:13:46.174 Initialization complete. 00:13:46.174 00:13:46.174 ================================== 00:13:46.174 == FDP tests for Namespace: #01 == 00:13:46.174 ================================== 00:13:46.174 00:13:46.174 Get Feature: FDP: 00:13:46.174 ================= 00:13:46.174 Enabled: Yes 00:13:46.174 FDP configuration Index: 0 00:13:46.174 00:13:46.174 FDP configurations log page 00:13:46.174 =========================== 00:13:46.174 Number of FDP configurations: 1 00:13:46.174 Version: 0 00:13:46.174 Size: 112 00:13:46.174 FDP Configuration Descriptor: 0 00:13:46.174 Descriptor Size: 96 00:13:46.174 Reclaim Group Identifier format: 2 00:13:46.175 FDP Volatile Write Cache: Not Present 00:13:46.175 FDP Configuration: Valid 00:13:46.175 Vendor Specific Size: 0 00:13:46.175 Number of Reclaim Groups: 2 00:13:46.175 Number of Reclaim Unit Handles: 8 00:13:46.175 Max Placement Identifiers: 128 00:13:46.175 Number of Namespaces Supported: 256 00:13:46.175 Reclaim Unit Nominal Size: 6000000 bytes 00:13:46.175 Estimated Reclaim Unit Time Limit: Not Reported 00:13:46.175 RUH Desc #000: RUH Type: Initially Isolated 00:13:46.175 RUH Desc #001: RUH Type: Initially Isolated 00:13:46.175 RUH Desc #002: RUH Type: Initially Isolated 00:13:46.175 RUH Desc #003: RUH Type: Initially Isolated 00:13:46.175 RUH Desc #004: RUH Type: Initially Isolated 00:13:46.175 RUH Desc #005: RUH Type: Initially Isolated 00:13:46.175 RUH Desc #006: RUH Type: Initially Isolated 00:13:46.175 RUH Desc #007: RUH Type: Initially Isolated 00:13:46.175 00:13:46.175 FDP reclaim unit handle usage log page 00:13:46.175 ====================================== 00:13:46.175 Number of Reclaim Unit Handles: 8 00:13:46.175 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:46.175 RUH Usage Desc #001: RUH Attributes: Unused 00:13:46.175 RUH Usage Desc #002: RUH Attributes: Unused 00:13:46.175 RUH Usage Desc #003: RUH Attributes: Unused 00:13:46.175 RUH Usage Desc #004: RUH Attributes: Unused 00:13:46.175 RUH Usage Desc #005: RUH Attributes: Unused 00:13:46.175 RUH Usage Desc #006: RUH Attributes: Unused 00:13:46.175 RUH Usage Desc #007: RUH Attributes: Unused 00:13:46.175 00:13:46.175 FDP statistics log page 00:13:46.175 ======================= 00:13:46.175 Host bytes with metadata written: 738787328 00:13:46.175 Media bytes with metadata written: 738951168 00:13:46.175 Media bytes erased: 0 00:13:46.175 00:13:46.175 FDP Reclaim unit handle status 00:13:46.175 ============================== 00:13:46.175 Number of RUHS descriptors: 2 00:13:46.175 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003f70 00:13:46.175 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 
0x0000000000006000 00:13:46.175 00:13:46.175 FDP write on placement id: 0 success 00:13:46.175 00:13:46.175 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:46.175 00:13:46.175 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:46.175 00:13:46.175 Get Feature: FDP Events for Placement handle: #0 00:13:46.175 ======================== 00:13:46.175 Number of FDP Events: 6 00:13:46.175 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:46.175 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:46.175 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:46.175 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:46.175 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:46.175 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:46.175 00:13:46.175 FDP events log page 00:13:46.175 =================== 00:13:46.175 Number of FDP events: 1 00:13:46.175 FDP Event #0: 00:13:46.175 Event Type: RU Not Written to Capacity 00:13:46.175 Placement Identifier: Valid 00:13:46.175 NSID: Valid 00:13:46.175 Location: Valid 00:13:46.175 Placement Identifier: 0 00:13:46.175 Event Timestamp: d 00:13:46.175 Namespace Identifier: 1 00:13:46.175 Reclaim Group Identifier: 0 00:13:46.175 Reclaim Unit Handle Identifier: 0 00:13:46.175 00:13:46.175 FDP test passed 00:13:46.175 00:13:46.175 real 0m0.271s 00:13:46.175 user 0m0.086s 00:13:46.175 sys 0m0.083s 00:13:46.175 12:34:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.175 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:13:46.175 ************************************ 00:13:46.175 END TEST nvme_flexible_data_placement 00:13:46.175 ************************************ 00:13:46.433 00:13:46.433 real 0m8.094s 00:13:46.433 user 0m1.275s 00:13:46.433 sys 0m1.831s 00:13:46.433 12:34:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.433 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 ************************************ 00:13:46.433 END TEST nvme_fdp 00:13:46.433 ************************************ 00:13:46.433 12:34:55 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:13:46.433 12:34:55 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:46.433 12:34:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:46.433 12:34:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:46.433 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 ************************************ 00:13:46.433 START TEST nvme_rpc 00:13:46.433 ************************************ 00:13:46.433 12:34:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:46.433 * Looking for test storage... 
00:13:46.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:46.433 12:34:55 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.433 12:34:55 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:46.433 12:34:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:46.433 12:34:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:13:46.433 12:34:55 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:46.433 12:34:55 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:46.433 12:34:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:46.433 12:34:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:46.433 12:34:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:46.433 12:34:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:46.433 12:34:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:46.433 12:34:55 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:46.433 12:34:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:13:46.433 12:34:55 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:13:46.433 12:34:55 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:13:46.433 12:34:55 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68142 00:13:46.433 12:34:55 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:46.433 12:34:55 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:46.433 12:34:55 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68142 00:13:46.433 12:34:55 -- common/autotest_common.sh@819 -- # '[' -z 68142 ']' 00:13:46.433 12:34:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.433 12:34:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:46.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.433 12:34:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.433 12:34:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:46.433 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:13:46.691 [2024-05-15 12:34:55.515337] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
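[Annotation] The get_first_nvme_bdf trace above simply takes the first address that gen_nvme.sh reports. A minimal sketch of the pattern as traced; the function body is a condensed reconstruction for illustration, not the verbatim autotest_common.sh source:

  # Pick the first NVMe PCI address, as in the trace above.
  rootdir=/home/vagrant/spdk_repo/spdk
  get_first_nvme_bdf() {
      local bdfs
      # gen_nvme.sh emits a bdev JSON config; jq pulls each controller's traddr
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} > 0 )) || return 1   # bail out if no controllers were found
      echo "${bdfs[0]}"                   # 0000:00:06.0 in this run
  }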
00:13:46.691 [2024-05-15 12:34:55.515534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68142 ] 00:13:46.691 [2024-05-15 12:34:55.695052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.258 [2024-05-15 12:34:56.008143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:47.258 [2024-05-15 12:34:56.008566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.258 [2024-05-15 12:34:56.008668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.193 12:34:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:48.193 12:34:57 -- common/autotest_common.sh@852 -- # return 0 00:13:48.193 12:34:57 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:13:48.759 Nvme0n1 00:13:48.759 12:34:57 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:48.759 12:34:57 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:49.017 request: 00:13:49.017 { 00:13:49.017 "filename": "non_existing_file", 00:13:49.017 "bdev_name": "Nvme0n1", 00:13:49.017 "method": "bdev_nvme_apply_firmware", 00:13:49.017 "req_id": 1 00:13:49.017 } 00:13:49.017 Got JSON-RPC error response 00:13:49.017 response: 00:13:49.017 { 00:13:49.017 "code": -32603, 00:13:49.017 "message": "open file failed." 00:13:49.017 } 00:13:49.017 12:34:57 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:49.017 12:34:57 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:49.017 12:34:57 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:49.275 12:34:58 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:49.275 12:34:58 -- nvme/nvme_rpc.sh@40 -- # killprocess 68142 00:13:49.275 12:34:58 -- common/autotest_common.sh@926 -- # '[' -z 68142 ']' 00:13:49.275 12:34:58 -- common/autotest_common.sh@930 -- # kill -0 68142 00:13:49.275 12:34:58 -- common/autotest_common.sh@931 -- # uname 00:13:49.275 12:34:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:49.275 12:34:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68142 00:13:49.275 12:34:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:49.275 12:34:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:49.275 killing process with pid 68142 00:13:49.275 12:34:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68142' 00:13:49.275 12:34:58 -- common/autotest_common.sh@945 -- # kill 68142 00:13:49.275 12:34:58 -- common/autotest_common.sh@950 -- # wait 68142 00:13:51.225 ************************************ 00:13:51.225 END TEST nvme_rpc 00:13:51.225 ************************************ 00:13:51.225 00:13:51.225 real 0m4.914s 00:13:51.225 user 0m9.354s 00:13:51.225 sys 0m0.762s 00:13:51.225 12:35:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.225 12:35:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.225 12:35:00 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:51.225 12:35:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:51.225 12:35:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:13:51.225 12:35:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.225 ************************************ 00:13:51.225 START TEST nvme_rpc_timeouts 00:13:51.225 ************************************ 00:13:51.225 12:35:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:51.484 * Looking for test storage... 00:13:51.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:51.484 12:35:00 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.484 12:35:00 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68226 00:13:51.484 12:35:00 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68226 00:13:51.484 12:35:00 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68249 00:13:51.484 12:35:00 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:51.484 12:35:00 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:51.484 12:35:00 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68249 00:13:51.484 12:35:00 -- common/autotest_common.sh@819 -- # '[' -z 68249 ']' 00:13:51.484 12:35:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.484 12:35:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.484 12:35:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.484 12:35:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.484 12:35:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.484 [2024-05-15 12:35:00.395881] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:51.484 [2024-05-15 12:35:00.396027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68249 ] 00:13:51.743 [2024-05-15 12:35:00.561624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:52.001 [2024-05-15 12:35:00.815242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:52.001 [2024-05-15 12:35:00.815645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.001 [2024-05-15 12:35:00.815658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.438 12:35:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:53.438 12:35:02 -- common/autotest_common.sh@852 -- # return 0 00:13:53.438 Checking default timeout settings: 00:13:53.438 12:35:02 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:53.438 12:35:02 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:53.438 Making settings changes with rpc: 00:13:53.438 12:35:02 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:53.438 12:35:02 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:53.695 Check default vs. 
modified settings: 00:13:53.695 12:35:02 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:53.695 12:35:02 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68226 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68226 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:53.952 Setting action_on_timeout is changed as expected. 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68226 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:53.952 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68226 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:53.953 Setting timeout_us is changed as expected. 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68226 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:53.953 12:35:02 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68226 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:54.210 Setting timeout_admin_us is changed as expected. 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
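[Annotation] The default-vs-modified check above boils down to snapshotting save_config twice around one bdev_nvme_set_options call and diffing three fields. A condensed sketch, assuming an spdk_tgt is already listening on /var/tmp/spdk.sock; the grep/awk/sed pipeline and echo strings mirror the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" save_config > /tmp/settings_default           # snapshot the defaults
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > /tmp/settings_modified          # snapshot the new values
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      # each config line looks like: "timeout_us": 12000000, -- strip the noise
      before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done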
00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68226 /tmp/settings_modified_68226 00:13:54.210 12:35:02 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68249 00:13:54.210 12:35:02 -- common/autotest_common.sh@926 -- # '[' -z 68249 ']' 00:13:54.210 12:35:02 -- common/autotest_common.sh@930 -- # kill -0 68249 00:13:54.210 12:35:02 -- common/autotest_common.sh@931 -- # uname 00:13:54.210 12:35:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:54.210 12:35:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68249 00:13:54.210 12:35:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:54.210 killing process with pid 68249 00:13:54.210 12:35:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:54.210 12:35:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68249' 00:13:54.210 12:35:02 -- common/autotest_common.sh@945 -- # kill 68249 00:13:54.210 12:35:02 -- common/autotest_common.sh@950 -- # wait 68249 00:13:56.738 RPC TIMEOUT SETTING TEST PASSED. 00:13:56.738 12:35:05 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:56.738 ************************************ 00:13:56.738 END TEST nvme_rpc_timeouts 00:13:56.738 ************************************ 00:13:56.738 00:13:56.738 real 0m5.059s 00:13:56.738 user 0m9.697s 00:13:56.738 sys 0m0.753s 00:13:56.738 12:35:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.738 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.738 12:35:05 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:13:56.738 12:35:05 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:13:56.738 12:35:05 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:56.738 12:35:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:56.738 12:35:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.738 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.738 ************************************ 00:13:56.738 START TEST nvme_xnvme 00:13:56.738 ************************************ 00:13:56.738 12:35:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:56.738 * Looking for test storage... 
00:13:56.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:56.738 12:35:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.738 12:35:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.738 12:35:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.738 12:35:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.738 12:35:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.738 12:35:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.738 12:35:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.738 12:35:05 -- paths/export.sh@5 -- # export PATH 00:13:56.738 12:35:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.738 12:35:05 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:56.738 12:35:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:56.738 12:35:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.738 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.738 ************************************ 00:13:56.738 START TEST xnvme_to_malloc_dd_copy 00:13:56.738 ************************************ 00:13:56.738 12:35:05 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:13:56.738 12:35:05 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:56.739 12:35:05 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:56.739 12:35:05 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:56.739 12:35:05 -- dd/common.sh@191 -- # return 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@18 -- # local io 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:56.739 
12:35:05 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:56.739 12:35:05 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:56.739 12:35:05 -- dd/common.sh@31 -- # xtrace_disable 00:13:56.739 12:35:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.739 { 00:13:56.739 "subsystems": [ 00:13:56.739 { 00:13:56.739 "subsystem": "bdev", 00:13:56.739 "config": [ 00:13:56.739 { 00:13:56.739 "params": { 00:13:56.739 "block_size": 512, 00:13:56.739 "num_blocks": 2097152, 00:13:56.739 "name": "malloc0" 00:13:56.739 }, 00:13:56.739 "method": "bdev_malloc_create" 00:13:56.739 }, 00:13:56.739 { 00:13:56.739 "params": { 00:13:56.739 "io_mechanism": "libaio", 00:13:56.739 "filename": "/dev/nullb0", 00:13:56.739 "name": "null0" 00:13:56.739 }, 00:13:56.739 "method": "bdev_xnvme_create" 00:13:56.739 }, 00:13:56.739 { 00:13:56.739 "method": "bdev_wait_for_examine" 00:13:56.739 } 00:13:56.739 ] 00:13:56.739 } 00:13:56.739 ] 00:13:56.739 } 00:13:56.739 [2024-05-15 12:35:05.516931] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
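[Annotation] The --json /dev/fd/62 in the spdk_dd invocation above is bash process substitution: gen_conf prints the JSON shown and spdk_dd reads it from an anonymous fd. A hand-rolled equivalent of this libaio copy step, with paths and parameters copied from the trace:

  # JSON config: a 1 GiB-worth malloc bdev and an xnvme bdev over /dev/nullb0
  conf='{"subsystems": [{"subsystem": "bdev", "config": [
    {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
     "method": "bdev_malloc_create"},
    {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
     "method": "bdev_xnvme_create"},
    {"method": "bdev_wait_for_examine"}]}]}'
  # Copy malloc0 -> null0; the config arrives on an anonymous fd, hence /dev/fd/NN
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
      --json <(echo "$conf")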
00:13:56.739 [2024-05-15 12:35:05.517100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68394 ] 00:13:56.739 [2024-05-15 12:35:05.684554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.998 [2024-05-15 12:35:05.964047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.900  Copying: 167/1024 [MB] (167 MBps) Copying: 329/1024 [MB] (162 MBps) Copying: 492/1024 [MB] (162 MBps) Copying: 663/1024 [MB] (171 MBps) Copying: 832/1024 [MB] (168 MBps) Copying: 1005/1024 [MB] (172 MBps) Copying: 1024/1024 [MB] (average 167 MBps) 00:14:08.900 00:14:08.900 12:35:17 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:08.900 12:35:17 -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:08.900 12:35:17 -- dd/common.sh@31 -- # xtrace_disable 00:14:08.900 12:35:17 -- common/autotest_common.sh@10 -- # set +x 00:14:08.900 { 00:14:08.900 "subsystems": [ 00:14:08.900 { 00:14:08.900 "subsystem": "bdev", 00:14:08.900 "config": [ 00:14:08.900 { 00:14:08.900 "params": { 00:14:08.900 "block_size": 512, 00:14:08.900 "num_blocks": 2097152, 00:14:08.900 "name": "malloc0" 00:14:08.900 }, 00:14:08.900 "method": "bdev_malloc_create" 00:14:08.900 }, 00:14:08.900 { 00:14:08.900 "params": { 00:14:08.900 "io_mechanism": "libaio", 00:14:08.900 "filename": "/dev/nullb0", 00:14:08.900 "name": "null0" 00:14:08.900 }, 00:14:08.900 "method": "bdev_xnvme_create" 00:14:08.900 }, 00:14:08.900 { 00:14:08.900 "method": "bdev_wait_for_examine" 00:14:08.900 } 00:14:08.900 ] 00:14:08.900 } 00:14:08.900 ] 00:14:08.900 } 00:14:08.900 [2024-05-15 12:35:17.265908] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:08.900 [2024-05-15 12:35:17.266078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68526 ] 00:14:08.900 [2024-05-15 12:35:17.441741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.900 [2024-05-15 12:35:17.713155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.726  Copying: 169/1024 [MB] (169 MBps) Copying: 317/1024 [MB] (147 MBps) Copying: 471/1024 [MB] (154 MBps) Copying: 635/1024 [MB] (163 MBps) Copying: 803/1024 [MB] (167 MBps) Copying: 975/1024 [MB] (172 MBps) Copying: 1024/1024 [MB] (average 162 MBps) 00:14:20.726 00:14:20.726 12:35:29 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:20.726 12:35:29 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:20.726 12:35:29 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:20.726 12:35:29 -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:20.726 12:35:29 -- dd/common.sh@31 -- # xtrace_disable 00:14:20.726 12:35:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.726 { 00:14:20.726 "subsystems": [ 00:14:20.726 { 00:14:20.726 "subsystem": "bdev", 00:14:20.726 "config": [ 00:14:20.726 { 00:14:20.726 "params": { 00:14:20.726 "block_size": 512, 00:14:20.726 "num_blocks": 2097152, 00:14:20.726 "name": "malloc0" 00:14:20.726 }, 00:14:20.726 "method": "bdev_malloc_create" 00:14:20.726 }, 00:14:20.726 { 00:14:20.726 "params": { 00:14:20.726 "io_mechanism": "io_uring", 00:14:20.726 "filename": "/dev/nullb0", 00:14:20.726 "name": "null0" 00:14:20.726 }, 00:14:20.726 "method": "bdev_xnvme_create" 00:14:20.726 }, 00:14:20.726 { 00:14:20.726 "method": "bdev_wait_for_examine" 00:14:20.726 } 00:14:20.726 ] 00:14:20.726 } 00:14:20.726 ] 00:14:20.726 } 00:14:20.726 [2024-05-15 12:35:29.720604] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:20.726 [2024-05-15 12:35:29.720787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68663 ] 00:14:20.985 [2024-05-15 12:35:29.892198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.243 [2024-05-15 12:35:30.192251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.268  Copying: 163/1024 [MB] (163 MBps) Copying: 325/1024 [MB] (162 MBps) Copying: 496/1024 [MB] (170 MBps) Copying: 670/1024 [MB] (173 MBps) Copying: 835/1024 [MB] (164 MBps) Copying: 1004/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 167 MBps) 00:14:33.268 00:14:33.268 12:35:41 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:33.268 12:35:41 -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:33.268 12:35:41 -- dd/common.sh@31 -- # xtrace_disable 00:14:33.268 12:35:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.268 { 00:14:33.268 "subsystems": [ 00:14:33.268 { 00:14:33.268 "subsystem": "bdev", 00:14:33.268 "config": [ 00:14:33.268 { 00:14:33.268 "params": { 00:14:33.268 "block_size": 512, 00:14:33.268 "num_blocks": 2097152, 00:14:33.268 "name": "malloc0" 00:14:33.268 }, 00:14:33.268 "method": "bdev_malloc_create" 00:14:33.268 }, 00:14:33.268 { 00:14:33.268 "params": { 00:14:33.268 "io_mechanism": "io_uring", 00:14:33.268 "filename": "/dev/nullb0", 00:14:33.268 "name": "null0" 00:14:33.268 }, 00:14:33.268 "method": "bdev_xnvme_create" 00:14:33.268 }, 00:14:33.268 { 00:14:33.268 "method": "bdev_wait_for_examine" 00:14:33.268 } 00:14:33.268 ] 00:14:33.268 } 00:14:33.268 ] 00:14:33.268 } 00:14:33.268 [2024-05-15 12:35:41.825458] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:33.268 [2024-05-15 12:35:41.825661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68801 ] 00:14:33.268 [2024-05-15 12:35:41.998513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.268 [2024-05-15 12:35:42.235656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.539  Copying: 188/1024 [MB] (188 MBps) Copying: 370/1024 [MB] (181 MBps) Copying: 555/1024 [MB] (185 MBps) Copying: 740/1024 [MB] (185 MBps) Copying: 929/1024 [MB] (188 MBps) Copying: 1024/1024 [MB] (average 185 MBps) 00:14:44.539 00:14:44.539 12:35:52 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:44.539 12:35:52 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:44.539 00:14:44.539 real 0m47.602s 00:14:44.539 user 0m40.927s 00:14:44.539 sys 0m5.998s 00:14:44.539 12:35:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.539 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:14:44.539 ************************************ 00:14:44.539 END TEST xnvme_to_malloc_dd_copy 00:14:44.539 ************************************ 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:44.539 12:35:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:44.539 12:35:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.539 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:14:44.539 ************************************ 00:14:44.539 START TEST xnvme_bdevperf 00:14:44.539 ************************************ 00:14:44.539 12:35:53 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:44.539 12:35:53 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:44.539 12:35:53 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:44.539 12:35:53 -- dd/common.sh@191 -- # return 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@60 -- # local io 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:44.539 12:35:53 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:44.539 12:35:53 -- dd/common.sh@31 -- # xtrace_disable 00:14:44.539 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:14:44.539 { 00:14:44.539 "subsystems": [ 00:14:44.539 { 00:14:44.539 "subsystem": "bdev", 00:14:44.540 "config": [ 00:14:44.540 { 00:14:44.540 "params": { 00:14:44.540 "io_mechanism": "libaio", 00:14:44.540 "filename": "/dev/nullb0", 
00:14:44.540 "name": "null0" 00:14:44.540 }, 00:14:44.540 "method": "bdev_xnvme_create" 00:14:44.540 }, 00:14:44.540 { 00:14:44.540 "method": "bdev_wait_for_examine" 00:14:44.540 } 00:14:44.540 ] 00:14:44.540 } 00:14:44.540 ] 00:14:44.540 } 00:14:44.540 [2024-05-15 12:35:53.189144] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:44.540 [2024-05-15 12:35:53.189326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68950 ] 00:14:44.540 [2024-05-15 12:35:53.363161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.798 [2024-05-15 12:35:53.623809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.057 Running I/O for 5 seconds... 00:14:50.327 00:14:50.327 Latency(us) 00:14:50.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.327 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:50.327 null0 : 5.00 123320.03 481.72 0.00 0.00 515.79 189.91 1087.30 00:14:50.327 =================================================================================================================== 00:14:50.327 Total : 123320.03 481.72 0.00 0.00 515.79 189.91 1087.30 00:14:51.263 12:36:00 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:51.263 12:36:00 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:51.263 12:36:00 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:51.263 12:36:00 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:51.263 12:36:00 -- dd/common.sh@31 -- # xtrace_disable 00:14:51.263 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:14:51.263 { 00:14:51.263 "subsystems": [ 00:14:51.263 { 00:14:51.263 "subsystem": "bdev", 00:14:51.263 "config": [ 00:14:51.263 { 00:14:51.263 "params": { 00:14:51.263 "io_mechanism": "io_uring", 00:14:51.263 "filename": "/dev/nullb0", 00:14:51.263 "name": "null0" 00:14:51.263 }, 00:14:51.263 "method": "bdev_xnvme_create" 00:14:51.263 }, 00:14:51.263 { 00:14:51.263 "method": "bdev_wait_for_examine" 00:14:51.263 } 00:14:51.263 ] 00:14:51.263 } 00:14:51.263 ] 00:14:51.263 } 00:14:51.263 [2024-05-15 12:36:00.238155] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:51.263 [2024-05-15 12:36:00.238354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69031 ] 00:14:51.520 [2024-05-15 12:36:00.415223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.778 [2024-05-15 12:36:00.653867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.037 Running I/O for 5 seconds... 
00:14:57.303 00:14:57.303 Latency(us) 00:14:57.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.303 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:57.303 null0 : 5.00 158972.54 620.99 0.00 0.00 399.50 235.52 3470.43 00:14:57.303 =================================================================================================================== 00:14:57.303 Total : 158972.54 620.99 0.00 0.00 399.50 235.52 3470.43 00:14:58.240 12:36:07 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:58.240 12:36:07 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:58.240 00:14:58.240 real 0m14.089s 00:14:58.240 user 0m10.957s 00:14:58.240 sys 0m2.879s 00:14:58.240 12:36:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.240 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:14:58.240 ************************************ 00:14:58.240 END TEST xnvme_bdevperf 00:14:58.240 ************************************ 00:14:58.240 00:14:58.240 real 1m1.875s 00:14:58.240 user 0m51.958s 00:14:58.240 sys 0m8.981s 00:14:58.240 12:36:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.240 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:14:58.240 ************************************ 00:14:58.240 END TEST nvme_xnvme 00:14:58.240 ************************************ 00:14:58.240 12:36:07 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:58.240 12:36:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:58.240 12:36:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:58.240 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:14:58.240 ************************************ 00:14:58.240 START TEST blockdev_xnvme 00:14:58.240 ************************************ 00:14:58.240 12:36:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:58.499 * Looking for test storage... 
00:14:58.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:58.499 12:36:07 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:58.499 12:36:07 -- bdev/nbd_common.sh@6 -- # set -e 00:14:58.499 12:36:07 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:58.499 12:36:07 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:58.499 12:36:07 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:58.499 12:36:07 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:58.499 12:36:07 -- bdev/blockdev.sh@18 -- # : 00:14:58.499 12:36:07 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:14:58.499 12:36:07 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:14:58.499 12:36:07 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:14:58.499 12:36:07 -- bdev/blockdev.sh@672 -- # uname -s 00:14:58.499 12:36:07 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:14:58.499 12:36:07 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:14:58.499 12:36:07 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:14:58.499 12:36:07 -- bdev/blockdev.sh@681 -- # crypto_device= 00:14:58.499 12:36:07 -- bdev/blockdev.sh@682 -- # dek= 00:14:58.499 12:36:07 -- bdev/blockdev.sh@683 -- # env_ctx= 00:14:58.499 12:36:07 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:14:58.499 12:36:07 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:14:58.499 12:36:07 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:14:58.499 12:36:07 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:14:58.499 12:36:07 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:14:58.499 12:36:07 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=69171 00:14:58.499 12:36:07 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:58.499 12:36:07 -- bdev/blockdev.sh@47 -- # waitforlisten 69171 00:14:58.499 12:36:07 -- common/autotest_common.sh@819 -- # '[' -z 69171 ']' 00:14:58.499 12:36:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.499 12:36:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.499 12:36:07 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:58.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.499 12:36:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.499 12:36:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.499 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:14:58.499 [2024-05-15 12:36:07.467563] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:58.499 [2024-05-15 12:36:07.467764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69171 ] 00:14:58.757 [2024-05-15 12:36:07.651476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.016 [2024-05-15 12:36:07.888367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.016 [2024-05-15 12:36:07.888643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.392 12:36:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:00.392 12:36:09 -- common/autotest_common.sh@852 -- # return 0 00:15:00.392 12:36:09 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:15:00.392 12:36:09 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:15:00.392 12:36:09 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:15:00.392 12:36:09 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:15:00.392 12:36:09 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:00.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:00.651 Waiting for block devices as requested 00:15:00.910 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:15:00.910 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:15:00.910 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:15:01.168 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:15:06.458 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:15:06.458 12:36:15 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:15:06.458 12:36:15 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:15:06.458 12:36:15 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:15:06.458 12:36:15 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:15:06.458 12:36:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:15:06.458 12:36:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:15:06.458 12:36:15 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:15:06.458 12:36:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:15:06.458 12:36:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:15:06.458 12:36:15 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:15:06.458 12:36:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:15:06.458 12:36:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:15:06.458 12:36:15 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:15:06.458 12:36:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:15:06.458 12:36:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:15:06.458 12:36:15 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:15:06.458 12:36:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:15:06.458 12:36:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:15:06.458 12:36:15 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:15:06.458 12:36:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:06.458 12:36:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:15:06.459 12:36:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:15:06.459 12:36:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:15:06.459 12:36:15 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:15:06.459 12:36:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:06.459 12:36:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:15:06.459 12:36:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:15:06.459 12:36:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:15:06.459 12:36:15 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:15:06.459 12:36:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:06.459 12:36:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:06.459 12:36:15 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:06.459 12:36:15 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:06.459 12:36:15 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:06.459 12:36:15 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:06.459 12:36:15 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:06.459 12:36:15 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:15:06.459 12:36:15 -- bdev/blockdev.sh@98 -- # rpc_cmd 
00:15:06.459 12:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.459 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:06.459 12:36:15 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:06.459 nvme0n1 00:15:06.459 nvme1n1 00:15:06.459 nvme1n2 00:15:06.459 nvme1n3 00:15:06.459 nvme2n1 00:15:06.459 nvme3n1 00:15:06.459 12:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:15:06.459 12:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.459 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:06.459 12:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@738 -- # cat 00:15:06.459 12:36:15 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:15:06.459 12:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.459 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:06.459 12:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:15:06.459 12:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.459 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:06.459 12:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:06.459 12:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.459 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:06.459 12:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:15:06.459 12:36:15 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:15:06.459 12:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.459 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:06.459 12:36:15 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:15:06.459 12:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.459 12:36:15 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:15:06.459 12:36:15 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "162eff18-d709-4f73-89ec-60d7f1286d54"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "162eff18-d709-4f73-89ec-60d7f1286d54",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "712a06a5-d794-4d38-bd13-6d0fe5a30192"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "712a06a5-d794-4d38-bd13-6d0fe5a30192",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "5eb8a8cd-7d42-4524-91ad-a33455b04a53"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5eb8a8cd-7d42-4524-91ad-a33455b04a53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "be6558cf-5813-4d11-8baa-97c343939ec2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "be6558cf-5813-4d11-8baa-97c343939ec2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "716a22a8-7195-47a8-9d25-e789ed316d69"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "716a22a8-7195-47a8-9d25-e789ed316d69",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6e698648-6038-4ab9-9e1d-2022eb39ecad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6e698648-6038-4ab9-9e1d-2022eb39ecad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:15:06.459 12:36:15 -- bdev/blockdev.sh@747 -- # jq -r .name 00:15:06.459 12:36:15 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:15:06.459 12:36:15 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:15:06.459 12:36:15 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:15:06.459 12:36:15 -- bdev/blockdev.sh@752 -- # killprocess 69171 00:15:06.459 12:36:15 
-- common/autotest_common.sh@926 -- # '[' -z 69171 ']' 00:15:06.459 12:36:15 -- common/autotest_common.sh@930 -- # kill -0 69171 00:15:06.459 12:36:15 -- common/autotest_common.sh@931 -- # uname 00:15:06.459 12:36:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:06.459 12:36:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69171 00:15:06.459 12:36:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.459 12:36:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.459 killing process with pid 69171 00:15:06.459 12:36:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69171' 00:15:06.459 12:36:15 -- common/autotest_common.sh@945 -- # kill 69171 00:15:06.459 12:36:15 -- common/autotest_common.sh@950 -- # wait 69171 00:15:08.988 12:36:17 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:08.988 12:36:17 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:08.988 12:36:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:15:08.988 12:36:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.988 12:36:17 -- common/autotest_common.sh@10 -- # set +x 00:15:08.988 ************************************ 00:15:08.988 START TEST bdev_hello_world 00:15:08.988 ************************************ 00:15:08.989 12:36:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:08.989 [2024-05-15 12:36:17.571326] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:08.989 [2024-05-15 12:36:17.571478] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69567 ] 00:15:08.989 [2024-05-15 12:36:17.736931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.989 [2024-05-15 12:36:17.977341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.554 [2024-05-15 12:36:18.402411] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:09.554 [2024-05-15 12:36:18.402478] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:09.554 [2024-05-15 12:36:18.402514] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:09.554 [2024-05-15 12:36:18.404860] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:09.554 [2024-05-15 12:36:18.405218] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:09.554 [2024-05-15 12:36:18.405256] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:09.554 [2024-05-15 12:36:18.405478] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
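In outline, the bdev_hello_world test above amounts to launching the prebuilt hello_bdev example against a bdev described in a JSON config. A minimal sketch, using the same binary path, config, and bdev name as this run:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1

The example opens nvme0n1, writes a buffer, reads it back, and prints the recovered string (the "Read string from bdev : Hello World!" NOTICE just above) before stopping the app, as the lines that follow show.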
00:15:09.554 00:15:09.554 [2024-05-15 12:36:18.405533] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:10.927 00:15:10.927 real 0m2.063s 00:15:10.927 user 0m1.703s 00:15:10.927 sys 0m0.244s 00:15:10.927 12:36:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.927 12:36:19 -- common/autotest_common.sh@10 -- # set +x 00:15:10.927 ************************************ 00:15:10.927 END TEST bdev_hello_world 00:15:10.927 ************************************ 00:15:10.927 12:36:19 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:15:10.927 12:36:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:10.927 12:36:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.927 12:36:19 -- common/autotest_common.sh@10 -- # set +x 00:15:10.927 ************************************ 00:15:10.927 START TEST bdev_bounds 00:15:10.927 ************************************ 00:15:10.927 12:36:19 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:15:10.927 12:36:19 -- bdev/blockdev.sh@288 -- # bdevio_pid=69609 00:15:10.927 12:36:19 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:10.927 Process bdevio pid: 69609 00:15:10.927 12:36:19 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:10.927 12:36:19 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 69609' 00:15:10.927 12:36:19 -- bdev/blockdev.sh@291 -- # waitforlisten 69609 00:15:10.927 12:36:19 -- common/autotest_common.sh@819 -- # '[' -z 69609 ']' 00:15:10.927 12:36:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.927 12:36:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.927 12:36:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.927 12:36:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.927 12:36:19 -- common/autotest_common.sh@10 -- # set +x 00:15:10.927 [2024-05-15 12:36:19.691074] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
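In outline, the bdev_bounds test starting above drives the bdevio CUnit app: it launches bdevio with -w (so it idles after startup until tests.py triggers perform_tests over the RPC socket, which is why the harness waits on /var/tmp/spdk.sock above). A minimal sketch, using the same paths as this run:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

One suite of write/read, offset-bounds, and passthru tests then runs per bdev; the per-suite results follow below.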
00:15:10.927 [2024-05-15 12:36:19.691268] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69609 ] 00:15:10.927 [2024-05-15 12:36:19.856469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.185 [2024-05-15 12:36:20.113080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.185 [2024-05-15 12:36:20.113210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.185 [2024-05-15 12:36:20.113228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.558 12:36:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.558 12:36:21 -- common/autotest_common.sh@852 -- # return 0 00:15:12.558 12:36:21 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:12.558 I/O targets: 00:15:12.558 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:12.558 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:12.558 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:12.558 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:12.558 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:12.558 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:12.558 00:15:12.558 00:15:12.558 CUnit - A unit testing framework for C - Version 2.1-3 00:15:12.558 http://cunit.sourceforge.net/ 00:15:12.558 00:15:12.558 00:15:12.558 Suite: bdevio tests on: nvme3n1 00:15:12.558 Test: blockdev write read block ...passed 00:15:12.558 Test: blockdev write zeroes read block ...passed 00:15:12.558 Test: blockdev write zeroes read no split ...passed 00:15:12.558 Test: blockdev write zeroes read split ...passed 00:15:12.558 Test: blockdev write zeroes read split partial ...passed 00:15:12.558 Test: blockdev reset ...passed 00:15:12.558 Test: blockdev write read 8 blocks ...passed 00:15:12.558 Test: blockdev write read size > 128k ...passed 00:15:12.558 Test: blockdev write read invalid size ...passed 00:15:12.558 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.558 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.558 Test: blockdev write read max offset ...passed 00:15:12.558 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.558 Test: blockdev writev readv 8 blocks ...passed 00:15:12.558 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.558 Test: blockdev writev readv block ...passed 00:15:12.558 Test: blockdev writev readv size > 128k ...passed 00:15:12.558 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.558 Test: blockdev comparev and writev ...passed 00:15:12.558 Test: blockdev nvme passthru rw ...passed 00:15:12.558 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.558 Test: blockdev nvme admin passthru ...passed 00:15:12.558 Test: blockdev copy ...passed 00:15:12.558 Suite: bdevio tests on: nvme2n1 00:15:12.558 Test: blockdev write read block ...passed 00:15:12.558 Test: blockdev write zeroes read block ...passed 00:15:12.558 Test: blockdev write zeroes read no split ...passed 00:15:12.558 Test: blockdev write zeroes read split ...passed 00:15:12.558 Test: blockdev write zeroes read split partial ...passed 00:15:12.558 Test: blockdev reset ...passed 00:15:12.558 Test: blockdev write read 8 blocks ...passed 00:15:12.558 Test: blockdev write read size > 128k 
...passed 00:15:12.558 Test: blockdev write read invalid size ...passed 00:15:12.558 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.558 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.558 Test: blockdev write read max offset ...passed 00:15:12.558 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.558 Test: blockdev writev readv 8 blocks ...passed 00:15:12.558 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.558 Test: blockdev writev readv block ...passed 00:15:12.558 Test: blockdev writev readv size > 128k ...passed 00:15:12.558 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.558 Test: blockdev comparev and writev ...passed 00:15:12.558 Test: blockdev nvme passthru rw ...passed 00:15:12.558 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.558 Test: blockdev nvme admin passthru ...passed 00:15:12.558 Test: blockdev copy ...passed 00:15:12.558 Suite: bdevio tests on: nvme1n3 00:15:12.558 Test: blockdev write read block ...passed 00:15:12.558 Test: blockdev write zeroes read block ...passed 00:15:12.558 Test: blockdev write zeroes read no split ...passed 00:15:12.816 Test: blockdev write zeroes read split ...passed 00:15:12.816 Test: blockdev write zeroes read split partial ...passed 00:15:12.816 Test: blockdev reset ...passed 00:15:12.816 Test: blockdev write read 8 blocks ...passed 00:15:12.816 Test: blockdev write read size > 128k ...passed 00:15:12.816 Test: blockdev write read invalid size ...passed 00:15:12.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.816 Test: blockdev write read max offset ...passed 00:15:12.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.816 Test: blockdev writev readv 8 blocks ...passed 00:15:12.816 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.816 Test: blockdev writev readv block ...passed 00:15:12.816 Test: blockdev writev readv size > 128k ...passed 00:15:12.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.816 Test: blockdev comparev and writev ...passed 00:15:12.816 Test: blockdev nvme passthru rw ...passed 00:15:12.816 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.816 Test: blockdev nvme admin passthru ...passed 00:15:12.816 Test: blockdev copy ...passed 00:15:12.816 Suite: bdevio tests on: nvme1n2 00:15:12.816 Test: blockdev write read block ...passed 00:15:12.816 Test: blockdev write zeroes read block ...passed 00:15:12.816 Test: blockdev write zeroes read no split ...passed 00:15:12.816 Test: blockdev write zeroes read split ...passed 00:15:12.816 Test: blockdev write zeroes read split partial ...passed 00:15:12.816 Test: blockdev reset ...passed 00:15:12.816 Test: blockdev write read 8 blocks ...passed 00:15:12.816 Test: blockdev write read size > 128k ...passed 00:15:12.816 Test: blockdev write read invalid size ...passed 00:15:12.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.816 Test: blockdev write read max offset ...passed 00:15:12.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.816 Test: blockdev writev readv 8 blocks ...passed 00:15:12.816 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.816 Test: blockdev writev readv 
block ...passed 00:15:12.816 Test: blockdev writev readv size > 128k ...passed 00:15:12.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.816 Test: blockdev comparev and writev ...passed 00:15:12.816 Test: blockdev nvme passthru rw ...passed 00:15:12.816 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.816 Test: blockdev nvme admin passthru ...passed 00:15:12.816 Test: blockdev copy ...passed 00:15:12.816 Suite: bdevio tests on: nvme1n1 00:15:12.816 Test: blockdev write read block ...passed 00:15:12.816 Test: blockdev write zeroes read block ...passed 00:15:12.816 Test: blockdev write zeroes read no split ...passed 00:15:12.816 Test: blockdev write zeroes read split ...passed 00:15:12.816 Test: blockdev write zeroes read split partial ...passed 00:15:12.816 Test: blockdev reset ...passed 00:15:12.816 Test: blockdev write read 8 blocks ...passed 00:15:12.816 Test: blockdev write read size > 128k ...passed 00:15:12.816 Test: blockdev write read invalid size ...passed 00:15:12.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.816 Test: blockdev write read max offset ...passed 00:15:12.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.816 Test: blockdev writev readv 8 blocks ...passed 00:15:12.816 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.816 Test: blockdev writev readv block ...passed 00:15:12.816 Test: blockdev writev readv size > 128k ...passed 00:15:12.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.816 Test: blockdev comparev and writev ...passed 00:15:12.816 Test: blockdev nvme passthru rw ...passed 00:15:12.816 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.816 Test: blockdev nvme admin passthru ...passed 00:15:12.816 Test: blockdev copy ...passed 00:15:12.816 Suite: bdevio tests on: nvme0n1 00:15:12.816 Test: blockdev write read block ...passed 00:15:12.816 Test: blockdev write zeroes read block ...passed 00:15:12.816 Test: blockdev write zeroes read no split ...passed 00:15:12.816 Test: blockdev write zeroes read split ...passed 00:15:13.074 Test: blockdev write zeroes read split partial ...passed 00:15:13.074 Test: blockdev reset ...passed 00:15:13.074 Test: blockdev write read 8 blocks ...passed 00:15:13.074 Test: blockdev write read size > 128k ...passed 00:15:13.074 Test: blockdev write read invalid size ...passed 00:15:13.074 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:13.074 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:13.074 Test: blockdev write read max offset ...passed 00:15:13.074 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:13.074 Test: blockdev writev readv 8 blocks ...passed 00:15:13.074 Test: blockdev writev readv 30 x 1block ...passed 00:15:13.074 Test: blockdev writev readv block ...passed 00:15:13.074 Test: blockdev writev readv size > 128k ...passed 00:15:13.074 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:13.074 Test: blockdev comparev and writev ...passed 00:15:13.074 Test: blockdev nvme passthru rw ...passed 00:15:13.074 Test: blockdev nvme passthru vendor specific ...passed 00:15:13.074 Test: blockdev nvme admin passthru ...passed 00:15:13.074 Test: blockdev copy ...passed 00:15:13.074 00:15:13.074 Run Summary: Type Total Ran Passed Failed Inactive 00:15:13.074 suites 6 6 n/a 0 0 
00:15:13.074 tests 138 138 138 0 0 00:15:13.074 asserts 780 780 780 0 n/a 00:15:13.074 00:15:13.074 Elapsed time = 1.311 seconds 00:15:13.074 0 00:15:13.074 12:36:21 -- bdev/blockdev.sh@293 -- # killprocess 69609 00:15:13.074 12:36:21 -- common/autotest_common.sh@926 -- # '[' -z 69609 ']' 00:15:13.074 12:36:21 -- common/autotest_common.sh@930 -- # kill -0 69609 00:15:13.074 12:36:21 -- common/autotest_common.sh@931 -- # uname 00:15:13.074 12:36:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:13.074 12:36:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69609 00:15:13.074 12:36:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:13.074 12:36:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:13.074 killing process with pid 69609 00:15:13.074 12:36:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69609' 00:15:13.074 12:36:21 -- common/autotest_common.sh@945 -- # kill 69609 00:15:13.074 12:36:21 -- common/autotest_common.sh@950 -- # wait 69609 00:15:14.447 12:36:23 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:15:14.447 00:15:14.447 real 0m3.454s 00:15:14.447 user 0m8.691s 00:15:14.447 sys 0m0.436s 00:15:14.447 12:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.447 12:36:23 -- common/autotest_common.sh@10 -- # set +x 00:15:14.447 ************************************ 00:15:14.447 END TEST bdev_bounds 00:15:14.447 ************************************ 00:15:14.447 12:36:23 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:15:14.447 12:36:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:14.447 12:36:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.447 12:36:23 -- common/autotest_common.sh@10 -- # set +x 00:15:14.447 ************************************ 00:15:14.447 START TEST bdev_nbd 00:15:14.447 ************************************ 00:15:14.447 12:36:23 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:15:14.447 12:36:23 -- bdev/blockdev.sh@298 -- # uname -s 00:15:14.447 12:36:23 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:15:14.447 12:36:23 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.447 12:36:23 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:14.447 12:36:23 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:14.447 12:36:23 -- bdev/blockdev.sh@302 -- # local bdev_all 00:15:14.447 12:36:23 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:15:14.447 12:36:23 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:15:14.447 12:36:23 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:14.447 12:36:23 -- bdev/blockdev.sh@309 -- # local nbd_all 00:15:14.447 12:36:23 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:15:14.447 12:36:23 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:14.447 12:36:23 -- bdev/blockdev.sh@312 -- # local nbd_list 00:15:14.447 12:36:23 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:14.447 12:36:23 -- bdev/blockdev.sh@313 -- # local bdev_list 00:15:14.447 12:36:23 -- bdev/blockdev.sh@316 -- # nbd_pid=69684 00:15:14.447 12:36:23 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:14.447 12:36:23 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:14.447 12:36:23 -- bdev/blockdev.sh@318 -- # waitforlisten 69684 /var/tmp/spdk-nbd.sock 00:15:14.447 12:36:23 -- common/autotest_common.sh@819 -- # '[' -z 69684 ']' 00:15:14.447 12:36:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:14.447 12:36:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:14.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:14.447 12:36:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:14.447 12:36:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:14.447 12:36:23 -- common/autotest_common.sh@10 -- # set +x 00:15:14.447 [2024-05-15 12:36:23.215317] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:14.447 [2024-05-15 12:36:23.215457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.447 [2024-05-15 12:36:23.379884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.705 [2024-05-15 12:36:23.618206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.272 12:36:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:15.272 12:36:24 -- common/autotest_common.sh@852 -- # return 0 00:15:15.272 12:36:24 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@24 -- # local i 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.272 12:36:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:15.531 12:36:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:15.531 12:36:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:15.531 12:36:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:15.531 12:36:24 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd0 00:15:15.531 12:36:24 -- common/autotest_common.sh@857 -- # local i 00:15:15.531 12:36:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:15.531 12:36:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:15.531 12:36:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:15.531 12:36:24 -- common/autotest_common.sh@861 -- # break 00:15:15.531 12:36:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:15.531 12:36:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:15.531 12:36:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.531 1+0 records in 00:15:15.531 1+0 records out 00:15:15.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054319 s, 7.5 MB/s 00:15:15.531 12:36:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.531 12:36:24 -- common/autotest_common.sh@874 -- # size=4096 00:15:15.531 12:36:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.531 12:36:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:15.531 12:36:24 -- common/autotest_common.sh@877 -- # return 0 00:15:15.531 12:36:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.531 12:36:24 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.531 12:36:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:15.789 12:36:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:15.789 12:36:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:15.789 12:36:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:15.789 12:36:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:15.789 12:36:24 -- common/autotest_common.sh@857 -- # local i 00:15:15.789 12:36:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:15.789 12:36:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:15.789 12:36:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:15.789 12:36:24 -- common/autotest_common.sh@861 -- # break 00:15:15.789 12:36:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:15.789 12:36:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:15.789 12:36:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.789 1+0 records in 00:15:15.789 1+0 records out 00:15:15.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432485 s, 9.5 MB/s 00:15:15.789 12:36:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.789 12:36:24 -- common/autotest_common.sh@874 -- # size=4096 00:15:15.789 12:36:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.789 12:36:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:15.789 12:36:24 -- common/autotest_common.sh@877 -- # return 0 00:15:15.789 12:36:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.789 12:36:24 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.789 12:36:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:15:16.046 12:36:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:16.046 12:36:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:16.046 12:36:25 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:15:16.046 12:36:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:15:16.046 12:36:25 -- common/autotest_common.sh@857 -- # local i 00:15:16.046 12:36:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:16.046 12:36:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:16.046 12:36:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:15:16.046 12:36:25 -- common/autotest_common.sh@861 -- # break 00:15:16.046 12:36:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:16.046 12:36:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:16.046 12:36:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.046 1+0 records in 00:15:16.046 1+0 records out 00:15:16.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464954 s, 8.8 MB/s 00:15:16.046 12:36:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.046 12:36:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:16.046 12:36:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.046 12:36:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:16.046 12:36:25 -- common/autotest_common.sh@877 -- # return 0 00:15:16.046 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:16.046 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:16.046 12:36:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:15:16.323 12:36:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:16.323 12:36:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:16.323 12:36:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:16.323 12:36:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:15:16.323 12:36:25 -- common/autotest_common.sh@857 -- # local i 00:15:16.323 12:36:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:16.323 12:36:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:16.323 12:36:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:15:16.323 12:36:25 -- common/autotest_common.sh@861 -- # break 00:15:16.323 12:36:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:16.323 12:36:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:16.323 12:36:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.323 1+0 records in 00:15:16.323 1+0 records out 00:15:16.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489905 s, 8.4 MB/s 00:15:16.323 12:36:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.323 12:36:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:16.323 12:36:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.581 12:36:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:16.581 12:36:25 -- common/autotest_common.sh@877 -- # return 0 00:15:16.581 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:16.581 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:16.581 12:36:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:16.581 12:36:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:16.581 12:36:25 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:16.581 12:36:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:16.581 12:36:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:15:16.581 12:36:25 -- common/autotest_common.sh@857 -- # local i 00:15:16.581 12:36:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:16.581 12:36:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:16.581 12:36:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:15:16.581 12:36:25 -- common/autotest_common.sh@861 -- # break 00:15:16.581 12:36:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:16.581 12:36:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:16.581 12:36:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.581 1+0 records in 00:15:16.581 1+0 records out 00:15:16.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574695 s, 7.1 MB/s 00:15:16.581 12:36:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.581 12:36:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:16.581 12:36:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.840 12:36:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:16.840 12:36:25 -- common/autotest_common.sh@877 -- # return 0 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:16.840 12:36:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:15:16.840 12:36:25 -- common/autotest_common.sh@857 -- # local i 00:15:16.840 12:36:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:16.840 12:36:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:16.840 12:36:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:15:16.840 12:36:25 -- common/autotest_common.sh@861 -- # break 00:15:16.840 12:36:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:16.840 12:36:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:16.840 12:36:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.840 1+0 records in 00:15:16.840 1+0 records out 00:15:16.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703534 s, 5.8 MB/s 00:15:16.840 12:36:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.840 12:36:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:16.840 12:36:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.840 12:36:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:16.840 12:36:25 -- common/autotest_common.sh@877 -- # return 0 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:16.840 12:36:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:17.099 12:36:25 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.099 12:36:26 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd0", 00:15:17.099 "bdev_name": "nvme0n1" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd1", 00:15:17.099 "bdev_name": "nvme1n1" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd2", 00:15:17.099 "bdev_name": "nvme1n2" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd3", 00:15:17.099 "bdev_name": "nvme1n3" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd4", 00:15:17.099 "bdev_name": "nvme2n1" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd5", 00:15:17.099 "bdev_name": "nvme3n1" 00:15:17.099 } 00:15:17.099 ]' 00:15:17.099 12:36:26 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:17.099 12:36:26 -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd0", 00:15:17.099 "bdev_name": "nvme0n1" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd1", 00:15:17.099 "bdev_name": "nvme1n1" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd2", 00:15:17.099 "bdev_name": "nvme1n2" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd3", 00:15:17.099 "bdev_name": "nvme1n3" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd4", 00:15:17.099 "bdev_name": "nvme2n1" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "nbd_device": "/dev/nbd5", 00:15:17.099 "bdev_name": "nvme3n1" 00:15:17.099 } 00:15:17.099 ]' 00:15:17.099 12:36:26 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:17.358 12:36:26 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:17.358 12:36:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.358 12:36:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:17.358 12:36:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.358 12:36:26 -- bdev/nbd_common.sh@51 -- # local i 00:15:17.358 12:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.359 12:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@41 -- # break 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.617 12:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@41 -- # break 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.875 12:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@41 -- # break 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.133 12:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@41 -- # break 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.390 12:36:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@41 -- # break 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.648 12:36:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@41 -- # break 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.906 12:36:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
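In outline, each nbd cycle above exports a bdev as a kernel block node over the test app's RPC socket, waits for the node to appear, proves it is readable, and tears it down again. A minimal sketch of one cycle, using the same socket, paths, and commands as this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk nvme0n1 /dev/nbd0
    grep -q -w nbd0 /proc/partitions    # the harness retries this up to 20 times
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct    # one direct-I/O read shows the node is live
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_stop_disk /dev/nbd0

Once every device has been stopped, nbd_get_disks returns '[]', which is what the count=0 check in the output just below verifies.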
00:15:19.164 12:36:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@65 -- # true 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@65 -- # count=0 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@122 -- # count=0 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@127 -- # return 0 00:15:19.164 12:36:27 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.164 12:36:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:19.165 12:36:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.165 12:36:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.165 12:36:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.165 12:36:27 -- bdev/nbd_common.sh@12 -- # local i 00:15:19.165 12:36:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.165 12:36:27 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:19.165 12:36:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:19.423 /dev/nbd0 00:15:19.423 12:36:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.423 12:36:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.423 12:36:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:19.423 12:36:28 -- common/autotest_common.sh@857 -- # local i 00:15:19.423 12:36:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:19.423 12:36:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:19.423 12:36:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:19.423 12:36:28 -- common/autotest_common.sh@861 -- # break 00:15:19.423 12:36:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:19.423 12:36:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:19.423 12:36:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.423 1+0 records in 00:15:19.423 1+0 records out 00:15:19.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440246 s, 
9.3 MB/s 00:15:19.423 12:36:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.423 12:36:28 -- common/autotest_common.sh@874 -- # size=4096 00:15:19.423 12:36:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.423 12:36:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:19.423 12:36:28 -- common/autotest_common.sh@877 -- # return 0 00:15:19.423 12:36:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.423 12:36:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:19.423 12:36:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:19.682 /dev/nbd1 00:15:19.682 12:36:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:19.682 12:36:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:19.682 12:36:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:19.682 12:36:28 -- common/autotest_common.sh@857 -- # local i 00:15:19.682 12:36:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:19.682 12:36:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:19.682 12:36:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:19.682 12:36:28 -- common/autotest_common.sh@861 -- # break 00:15:19.682 12:36:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:19.682 12:36:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:19.682 12:36:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.682 1+0 records in 00:15:19.682 1+0 records out 00:15:19.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552662 s, 7.4 MB/s 00:15:19.682 12:36:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.682 12:36:28 -- common/autotest_common.sh@874 -- # size=4096 00:15:19.682 12:36:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.682 12:36:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:19.682 12:36:28 -- common/autotest_common.sh@877 -- # return 0 00:15:19.682 12:36:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.682 12:36:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:19.682 12:36:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:15:19.939 /dev/nbd10 00:15:19.939 12:36:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:19.939 12:36:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:19.939 12:36:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:15:19.939 12:36:28 -- common/autotest_common.sh@857 -- # local i 00:15:19.939 12:36:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:19.939 12:36:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:19.939 12:36:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:15:19.939 12:36:28 -- common/autotest_common.sh@861 -- # break 00:15:19.939 12:36:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:19.939 12:36:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:19.939 12:36:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.939 1+0 records in 00:15:19.939 1+0 records out 00:15:19.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000520481 s, 7.9 MB/s 00:15:19.939 12:36:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.939 12:36:28 -- common/autotest_common.sh@874 -- # size=4096 00:15:19.939 12:36:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.939 12:36:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:19.939 12:36:28 -- common/autotest_common.sh@877 -- # return 0 00:15:19.939 12:36:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.939 12:36:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:19.939 12:36:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:15:20.197 /dev/nbd11 00:15:20.197 12:36:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:20.197 12:36:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:20.197 12:36:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:15:20.197 12:36:29 -- common/autotest_common.sh@857 -- # local i 00:15:20.197 12:36:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:20.197 12:36:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:20.197 12:36:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:15:20.197 12:36:29 -- common/autotest_common.sh@861 -- # break 00:15:20.197 12:36:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:20.197 12:36:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:20.197 12:36:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.197 1+0 records in 00:15:20.197 1+0 records out 00:15:20.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498854 s, 8.2 MB/s 00:15:20.197 12:36:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.197 12:36:29 -- common/autotest_common.sh@874 -- # size=4096 00:15:20.197 12:36:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.197 12:36:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:20.197 12:36:29 -- common/autotest_common.sh@877 -- # return 0 00:15:20.197 12:36:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.197 12:36:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:20.197 12:36:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:20.454 /dev/nbd12 00:15:20.454 12:36:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:20.454 12:36:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:20.454 12:36:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:15:20.454 12:36:29 -- common/autotest_common.sh@857 -- # local i 00:15:20.454 12:36:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:20.454 12:36:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:20.454 12:36:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:15:20.454 12:36:29 -- common/autotest_common.sh@861 -- # break 00:15:20.454 12:36:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:20.455 12:36:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:20.455 12:36:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.455 1+0 records in 00:15:20.455 1+0 records out 00:15:20.455 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000766671 s, 5.3 MB/s 00:15:20.455 12:36:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.455 12:36:29 -- common/autotest_common.sh@874 -- # size=4096 00:15:20.455 12:36:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.455 12:36:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:20.455 12:36:29 -- common/autotest_common.sh@877 -- # return 0 00:15:20.455 12:36:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.455 12:36:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:20.455 12:36:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:20.712 /dev/nbd13 00:15:20.712 12:36:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:20.712 12:36:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:20.712 12:36:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:15:20.712 12:36:29 -- common/autotest_common.sh@857 -- # local i 00:15:20.712 12:36:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:20.712 12:36:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:20.712 12:36:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:15:20.712 12:36:29 -- common/autotest_common.sh@861 -- # break 00:15:20.712 12:36:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:20.712 12:36:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:20.712 12:36:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.712 1+0 records in 00:15:20.712 1+0 records out 00:15:20.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737011 s, 5.6 MB/s 00:15:20.712 12:36:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.712 12:36:29 -- common/autotest_common.sh@874 -- # size=4096 00:15:20.712 12:36:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.712 12:36:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:20.712 12:36:29 -- common/autotest_common.sh@877 -- # return 0 00:15:20.712 12:36:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.712 12:36:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:20.712 12:36:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:20.712 12:36:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.712 12:36:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd0", 00:15:20.970 "bdev_name": "nvme0n1" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd1", 00:15:20.970 "bdev_name": "nvme1n1" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd10", 00:15:20.970 "bdev_name": "nvme1n2" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd11", 00:15:20.970 "bdev_name": "nvme1n3" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd12", 00:15:20.970 "bdev_name": "nvme2n1" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd13", 00:15:20.970 "bdev_name": "nvme3n1" 00:15:20.970 } 00:15:20.970 ]' 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:20.970 { 00:15:20.970 "nbd_device": 
"/dev/nbd0", 00:15:20.970 "bdev_name": "nvme0n1" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd1", 00:15:20.970 "bdev_name": "nvme1n1" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd10", 00:15:20.970 "bdev_name": "nvme1n2" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd11", 00:15:20.970 "bdev_name": "nvme1n3" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd12", 00:15:20.970 "bdev_name": "nvme2n1" 00:15:20.970 }, 00:15:20.970 { 00:15:20.970 "nbd_device": "/dev/nbd13", 00:15:20.970 "bdev_name": "nvme3n1" 00:15:20.970 } 00:15:20.970 ]' 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:20.970 /dev/nbd1 00:15:20.970 /dev/nbd10 00:15:20.970 /dev/nbd11 00:15:20.970 /dev/nbd12 00:15:20.970 /dev/nbd13' 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:20.970 /dev/nbd1 00:15:20.970 /dev/nbd10 00:15:20.970 /dev/nbd11 00:15:20.970 /dev/nbd12 00:15:20.970 /dev/nbd13' 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@65 -- # count=6 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@66 -- # echo 6 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@95 -- # count=6 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:20.970 256+0 records in 00:15:20.970 256+0 records out 00:15:20.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00856145 s, 122 MB/s 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.970 12:36:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:21.229 256+0 records in 00:15:21.229 256+0 records out 00:15:21.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142129 s, 7.4 MB/s 00:15:21.229 12:36:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:21.229 12:36:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:21.229 256+0 records in 00:15:21.229 256+0 records out 00:15:21.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141555 s, 7.4 MB/s 00:15:21.229 12:36:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:21.229 12:36:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:21.487 256+0 records in 00:15:21.487 256+0 records out 00:15:21.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140078 s, 7.5 MB/s 00:15:21.487 12:36:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:21.487 12:36:30 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:21.487 256+0 records in 00:15:21.487 256+0 records out 00:15:21.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128476 s, 8.2 MB/s 00:15:21.487 12:36:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:21.487 12:36:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:21.745 256+0 records in 00:15:21.745 256+0 records out 00:15:21.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175738 s, 6.0 MB/s 00:15:21.745 12:36:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:21.745 12:36:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:22.004 256+0 records in 00:15:22.004 256+0 records out 00:15:22.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153441 s, 6.8 MB/s 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@51 -- # local i 00:15:22.004 12:36:30 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.004 12:36:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@41 -- # break 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.264 12:36:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@41 -- # break 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.522 12:36:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@41 -- # break 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.780 12:36:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@41 -- # break 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.038 12:36:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:23.297 12:36:32 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@41 -- # break 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.297 12:36:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@41 -- # break 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.555 12:36:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@65 -- # true 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@65 -- # count=0 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@104 -- # count=0 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@109 -- # return 0 00:15:23.813 12:36:32 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:23.813 12:36:32 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:24.072 malloc_lvol_verify 00:15:24.072 12:36:32 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:24.330 8960d993-40b8-4d60-8667-545d884fc634 00:15:24.330 12:36:33 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:24.587 be886f8a-f50d-4948-9f71-e4ede84949ad 00:15:24.587 12:36:33 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:24.845 /dev/nbd0 00:15:24.845 12:36:33 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:24.845 mke2fs 1.46.5 (30-Dec-2021) 00:15:24.846 Discarding device blocks: 0/4096 done 00:15:24.846 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:24.846 00:15:24.846 Allocating group tables: 0/1 done 00:15:24.846 Writing inode tables: 0/1 done 00:15:24.846 Creating journal (1024 blocks): done 00:15:24.846 Writing superblocks and filesystem accounting information: 0/1 done 00:15:24.846 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@51 -- # local i 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:24.846 12:36:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@41 -- # break 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:25.103 12:36:33 -- bdev/nbd_common.sh@147 -- # return 0 00:15:25.103 12:36:33 -- bdev/blockdev.sh@324 -- # killprocess 69684 00:15:25.103 12:36:33 -- common/autotest_common.sh@926 -- # '[' -z 69684 ']' 00:15:25.103 12:36:33 -- common/autotest_common.sh@930 -- # kill -0 69684 00:15:25.103 12:36:33 -- common/autotest_common.sh@931 -- # uname 00:15:25.103 12:36:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:25.103 12:36:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69684 00:15:25.103 12:36:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:25.103 12:36:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:25.103 killing process with pid 69684 00:15:25.103 12:36:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69684' 00:15:25.103 12:36:34 -- common/autotest_common.sh@945 -- # kill 69684 00:15:25.103 12:36:34 -- common/autotest_common.sh@950 -- # wait 69684 00:15:26.473 12:36:35 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:15:26.474 00:15:26.474 real 0m12.105s 00:15:26.474 user 0m16.992s 00:15:26.474 sys 0m3.947s 00:15:26.474 12:36:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.474 12:36:35 -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 ************************************ 00:15:26.474 END TEST bdev_nbd 00:15:26.474 ************************************ 00:15:26.474 12:36:35 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:15:26.474 12:36:35 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
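
The nbd_with_lvol_verify step in the bdev_nbd test above reduces to a short RPC sequence against the running target: create a malloc bdev, build a logical volume store on it, carve out a logical volume, expose it over NBD, and put a filesystem on it. A minimal sketch of the same flow driven by hand, assuming a target is listening on /var/tmp/spdk-nbd.sock and the kernel nbd module is loaded (the RPC names and sizes are the ones visible in the log):

    sudo modprobe nbd
    # 16 MiB malloc bdev with 512-byte blocks, as in the log above
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    # 4 MiB lvol named "lvol" inside lvstore "lvs"
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    sudo mkfs.ext4 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The dd/cmp verification earlier in the same test follows the same pattern: write a random pattern file to each /dev/nbdN with oflag=direct, then cmp -b -n 1M the pattern against the device to confirm the bytes round-tripped through the bdev.
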
00:15:26.474 12:36:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:26.474 12:36:35 -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 ************************************ 00:15:26.474 START TEST bdev_fio 00:15:26.474 ************************************ 00:15:26.474 12:36:35 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@329 -- # local env_context 00:15:26.474 12:36:35 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:26.474 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:26.474 12:36:35 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:26.474 12:36:35 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:15:26.474 12:36:35 -- bdev/blockdev.sh@337 -- # echo '' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@337 -- # env_context= 00:15:26.474 12:36:35 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.474 12:36:35 -- common/autotest_common.sh@1260 -- # local workload=verify 00:15:26.474 12:36:35 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:15:26.474 12:36:35 -- common/autotest_common.sh@1262 -- # local env_context= 00:15:26.474 12:36:35 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:15:26.474 12:36:35 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.474 12:36:35 -- common/autotest_common.sh@1280 -- # cat 00:15:26.474 12:36:35 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1293 -- # cat 00:15:26.474 12:36:35 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:15:26.474 12:36:35 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:26.474 12:36:35 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:15:26.474 12:36:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:26.474 12:36:35 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:15:26.474 12:36:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:26.474 12:36:35 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:15:26.474 12:36:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:26.474 12:36:35 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:15:26.474 12:36:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:26.474 12:36:35 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:15:26.474 12:36:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:26.474 12:36:35 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:15:26.474 12:36:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:26.474 12:36:35 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:15:26.474 12:36:35 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:26.474 12:36:35 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.474 12:36:35 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:26.474 12:36:35 -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 ************************************ 00:15:26.474 START TEST bdev_fio_rw_verify 00:15:26.474 ************************************ 00:15:26.474 12:36:35 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.474 12:36:35 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.474 12:36:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:26.474 12:36:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:26.474 12:36:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:26.474 12:36:35 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.474 12:36:35 -- common/autotest_common.sh@1320 -- # shift 00:15:26.474 12:36:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:26.474 12:36:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.474 12:36:35 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.474 12:36:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:26.474 12:36:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:26.474 12:36:35 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:26.474 12:36:35 -- common/autotest_common.sh@1326 -- # break 00:15:26.474 12:36:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:26.474 12:36:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:26.732 
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.732 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.732 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.732 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.732 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.732 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:26.732 fio-3.35 00:15:26.732 Starting 6 threads 00:15:38.934 00:15:38.934 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70104: Wed May 15 12:36:46 2024 00:15:38.934 read: IOPS=28.9k, BW=113MiB/s (118MB/s)(1130MiB/10001msec) 00:15:38.934 slat (usec): min=2, max=1676, avg= 6.70, stdev= 5.76 00:15:38.934 clat (usec): min=116, max=10846, avg=645.30, stdev=257.65 00:15:38.934 lat (usec): min=119, max=10859, avg=652.00, stdev=258.23 00:15:38.934 clat percentiles (usec): 00:15:38.934 | 50.000th=[ 660], 99.000th=[ 1205], 99.900th=[ 2040], 99.990th=[ 4817], 00:15:38.934 | 99.999th=[10814] 00:15:38.934 write: IOPS=29.3k, BW=115MiB/s (120MB/s)(1145MiB/10001msec); 0 zone resets 00:15:38.934 slat (usec): min=13, max=6595, avg=26.60, stdev=32.46 00:15:38.934 clat (usec): min=80, max=11184, avg=725.89, stdev=263.09 00:15:38.934 lat (usec): min=96, max=11211, avg=752.49, stdev=265.81 00:15:38.934 clat percentiles (usec): 00:15:38.934 | 50.000th=[ 734], 99.000th=[ 1369], 99.900th=[ 2024], 99.990th=[ 4490], 00:15:38.934 | 99.999th=[11207] 00:15:38.934 bw ( KiB/s): min=98240, max=144176, per=99.68%, avg=116878.37, stdev=2688.96, samples=114 00:15:38.934 iops : min=24560, max=36044, avg=29219.37, stdev=672.24, samples=114 00:15:38.934 lat (usec) : 100=0.01%, 250=2.97%, 500=19.74%, 750=37.17%, 1000=32.83% 00:15:38.934 lat (msec) : 2=7.19%, 4=0.08%, 10=0.02%, 20=0.01% 00:15:38.934 cpu : usr=60.13%, sys=25.93%, ctx=7739, majf=0, minf=26447 00:15:38.934 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.934 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.934 issued rwts: total=289285,293153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:38.934 00:15:38.934 Run status group 0 (all jobs): 00:15:38.934 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1130MiB (1185MB), run=10001-10001msec 00:15:38.934 WRITE: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1145MiB (1201MB), run=10001-10001msec 00:15:38.934 ----------------------------------------------------- 00:15:38.934 Suppressions used: 00:15:38.934 count bytes template 00:15:38.934 6 48 /usr/src/fio/parse.c 00:15:38.934 3681 353376 /usr/src/fio/iolog.c 00:15:38.934 1 8 libtcmalloc_minimal.so 00:15:38.934 1 904 libcrypto.so 00:15:38.934 ----------------------------------------------------- 00:15:38.934 00:15:38.934 00:15:38.934 real 0m12.393s 00:15:38.934 user 0m38.035s 00:15:38.934 sys 0m16.021s 00:15:38.934 12:36:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.934 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:15:38.934 
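
The rw-verify pass above drives fio through SPDK's external ioengine rather than the kernel block layer: fio is launched with the spdk_bdev plugin preloaded and a JSON bdev config instead of device filenames. The generated job file itself is never echoed into the log, so the following is only an illustrative approximation of what fio_config_gen produces for workload=verify (job names, filenames, and the CLI parameters match the output above; direct= and verify= settings are assumptions):

    # run as: LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio bdev.fio \
    #             --spdk_json_conf=./test/bdev/bdev.json
    [global]
    ioengine=spdk_bdev     # provided by the preloaded plugin
    thread=1               # SPDK fio plugins require thread-based jobs
    direct=1               # assumed
    bs=4k                  # passed as --bs=4k in the log
    iodepth=8              # passed as --iodepth=8
    runtime=10             # passed as --runtime=10
    rw=randwrite           # matches the job lines above
    verify=crc32c          # assumed; the harness only states "verify"
    serialize_overlap=1    # emitted by fio_config_gen for fio-3.x, per the log

    [job_nvme0n1]
    filename=nvme0n1       # a bdev name from the JSON config, not a /dev path

Note that filename= refers to the bdev by name; the spdk_bdev engine resolves it against the bdevs created from --spdk_json_conf.
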
************************************ 00:15:38.934 END TEST bdev_fio_rw_verify 00:15:38.934 ************************************ 00:15:38.934 12:36:47 -- bdev/blockdev.sh@348 -- # rm -f 00:15:38.934 12:36:47 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.934 12:36:47 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:38.934 12:36:47 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.934 12:36:47 -- common/autotest_common.sh@1260 -- # local workload=trim 00:15:38.934 12:36:47 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:15:38.934 12:36:47 -- common/autotest_common.sh@1262 -- # local env_context= 00:15:38.934 12:36:47 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:15:38.934 12:36:47 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:38.934 12:36:47 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:15:38.934 12:36:47 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:15:38.934 12:36:47 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.935 12:36:47 -- common/autotest_common.sh@1280 -- # cat 00:15:38.935 12:36:47 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:15:38.935 12:36:47 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:15:38.935 12:36:47 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:15:38.935 12:36:47 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:38.935 12:36:47 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "162eff18-d709-4f73-89ec-60d7f1286d54"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "162eff18-d709-4f73-89ec-60d7f1286d54",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "712a06a5-d794-4d38-bd13-6d0fe5a30192"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "712a06a5-d794-4d38-bd13-6d0fe5a30192",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "5eb8a8cd-7d42-4524-91ad-a33455b04a53"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5eb8a8cd-7d42-4524-91ad-a33455b04a53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' 
' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "be6558cf-5813-4d11-8baa-97c343939ec2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "be6558cf-5813-4d11-8baa-97c343939ec2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "716a22a8-7195-47a8-9d25-e789ed316d69"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "716a22a8-7195-47a8-9d25-e789ed316d69",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6e698648-6038-4ab9-9e1d-2022eb39ecad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6e698648-6038-4ab9-9e1d-2022eb39ecad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:15:38.935 12:36:47 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:15:38.935 12:36:47 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:38.935 /home/vagrant/spdk_repo/spdk 00:15:38.935 12:36:47 -- bdev/blockdev.sh@360 -- # popd 00:15:38.935 12:36:47 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:15:38.935 12:36:47 -- bdev/blockdev.sh@362 -- # return 0 00:15:38.935 00:15:38.935 real 0m12.590s 00:15:38.935 user 0m38.126s 00:15:38.935 sys 0m16.111s 00:15:38.935 12:36:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.935 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:15:38.935 ************************************ 00:15:38.935 END TEST bdev_fio 00:15:38.935 ************************************ 00:15:38.935 12:36:47 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:38.935 12:36:47 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:38.935 12:36:47 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:15:38.935 12:36:47 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:38.935 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:15:38.935 ************************************ 00:15:38.935 START TEST bdev_verify 00:15:38.935 ************************************ 00:15:38.935 12:36:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:39.200 [2024-05-15 12:36:48.013677] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:39.200 [2024-05-15 12:36:48.013853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70281 ] 00:15:39.200 [2024-05-15 12:36:48.187976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:39.458 [2024-05-15 12:36:48.452284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.458 [2024-05-15 12:36:48.452289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.025 Running I/O for 5 seconds... 00:15:45.293 00:15:45.293 Latency(us) 00:15:45.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.293 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x0 length 0x20000 00:15:45.293 nvme0n1 : 5.07 2490.64 9.73 0.00 0.00 51222.83 8460.10 71970.44 00:15:45.293 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x20000 length 0x20000 00:15:45.293 nvme0n1 : 5.09 2522.49 9.85 0.00 0.00 50486.49 8519.68 68157.44 00:15:45.293 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x0 length 0x80000 00:15:45.293 nvme1n1 : 5.07 2506.82 9.79 0.00 0.00 50816.46 8996.31 64344.44 00:15:45.293 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x80000 length 0x80000 00:15:45.293 nvme1n1 : 5.09 2503.28 9.78 0.00 0.00 50711.94 5749.29 70063.94 00:15:45.293 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x0 length 0x80000 00:15:45.293 nvme1n2 : 5.07 2392.34 9.35 0.00 0.00 53163.95 14239.19 71493.82 00:15:45.293 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x80000 length 0x80000 00:15:45.293 nvme1n2 : 5.09 2544.17 9.94 0.00 0.00 49901.66 5540.77 65297.69 00:15:45.293 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x0 length 0x80000 00:15:45.293 nvme1n3 : 5.07 2464.94 9.63 0.00 0.00 51571.25 12332.68 79119.83 00:15:45.293 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x80000 length 0x80000 00:15:45.293 nvme1n3 : 5.10 2420.85 9.46 0.00 0.00 52364.14 8579.26 70540.57 00:15:45.293 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x0 length 0xbd0bd 00:15:45.293 nvme2n1 : 5.08 2594.66 10.14 0.00 0.00 48931.76 4438.57 68157.44 00:15:45.293 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0xbd0bd length 
0xbd0bd 00:15:45.293 nvme2n1 : 5.10 2702.06 10.55 0.00 0.00 46885.72 3083.17 62437.93 00:15:45.293 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0x0 length 0xa0000 00:15:45.293 nvme3n1 : 5.07 2465.36 9.63 0.00 0.00 51402.41 4200.26 66727.56 00:15:45.293 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.293 Verification LBA range: start 0xa0000 length 0xa0000 00:15:45.293 nvme3n1 : 5.10 2490.21 9.73 0.00 0.00 50781.11 9770.82 68634.07 00:15:45.293 =================================================================================================================== 00:15:45.293 Total : 30097.82 117.57 0.00 0.00 50637.71 3083.17 79119.83 00:15:46.238 00:15:46.238 real 0m7.308s 00:15:46.238 user 0m9.261s 00:15:46.238 sys 0m3.537s 00:15:46.238 12:36:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.238 12:36:55 -- common/autotest_common.sh@10 -- # set +x 00:15:46.238 ************************************ 00:15:46.238 END TEST bdev_verify 00:15:46.238 ************************************ 00:15:46.497 12:36:55 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:46.497 12:36:55 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:15:46.497 12:36:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.497 12:36:55 -- common/autotest_common.sh@10 -- # set +x 00:15:46.497 ************************************ 00:15:46.497 START TEST bdev_verify_big_io 00:15:46.497 ************************************ 00:15:46.497 12:36:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:46.497 [2024-05-15 12:36:55.361652] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:46.497 [2024-05-15 12:36:55.361810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70386 ] 00:15:46.756 [2024-05-15 12:36:55.531014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.015 [2024-05-15 12:36:55.775383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.015 [2024-05-15 12:36:55.775390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.582 Running I/O for 5 seconds... 
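
Both this big-I/O pass (just starting above) and the 4 KiB bdev_verify pass before it are bdevperf invocations that differ only in the -o I/O size. A sketch of the command with the flags spelled out (meanings are as understood from the output; -C in particular is inferred from the per-core job lines, with each bdev exercised from both Core Mask 0x1 and 0x2, rather than quoted from the tool's help text):

    ./build/examples/bdevperf \
        --json ./test/bdev/bdev.json \  # bdev-layer config to load
        -q 128 \                        # outstanding I/Os per job
        -o 65536 \                      # I/O size in bytes (4096 in the first pass)
        -w verify \                     # write a pattern, read it back, compare
        -t 5 \                          # run for 5 seconds
        -C \                            # apparently lets every core in the mask drive every bdev
        -m 0x3                          # core mask: reactors on cores 0 and 1
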
00:15:54.143 00:15:54.143 Latency(us) 00:15:54.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.143 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x0 length 0x2000 00:15:54.143 nvme0n1 : 5.66 245.96 15.37 0.00 0.00 497638.86 55765.18 713031.68 00:15:54.143 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x2000 length 0x2000 00:15:54.143 nvme0n1 : 5.76 272.23 17.01 0.00 0.00 456473.96 73400.32 758787.72 00:15:54.143 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x0 length 0x8000 00:15:54.143 nvme1n1 : 5.55 250.69 15.67 0.00 0.00 483144.79 50522.30 652023.62 00:15:54.143 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x8000 length 0x8000 00:15:54.143 nvme1n1 : 5.72 273.78 17.11 0.00 0.00 442948.69 63867.81 655836.63 00:15:54.143 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x0 length 0x8000 00:15:54.143 nvme1n2 : 5.66 261.22 16.33 0.00 0.00 458381.23 54335.30 632958.60 00:15:54.143 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x8000 length 0x8000 00:15:54.143 nvme1n2 : 5.76 241.58 15.10 0.00 0.00 487663.22 63867.81 629145.60 00:15:54.143 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x0 length 0x8000 00:15:54.143 nvme1n3 : 5.66 261.07 16.32 0.00 0.00 447700.37 57195.05 507129.48 00:15:54.143 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x8000 length 0x8000 00:15:54.143 nvme1n3 : 5.77 271.75 16.98 0.00 0.00 429246.54 37653.41 648210.62 00:15:54.143 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x0 length 0xbd0b 00:15:54.143 nvme2n1 : 5.74 257.29 16.08 0.00 0.00 445581.43 68157.44 735909.70 00:15:54.143 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:54.143 nvme2n1 : 5.77 218.79 13.67 0.00 0.00 520436.95 33602.09 766413.73 00:15:54.143 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0x0 length 0xa000 00:15:54.143 nvme3n1 : 5.75 304.12 19.01 0.00 0.00 369795.58 7030.23 423243.40 00:15:54.143 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.143 Verification LBA range: start 0xa000 length 0xa000 00:15:54.143 nvme3n1 : 5.77 302.76 18.92 0.00 0.00 367520.76 8817.57 385113.37 00:15:54.143 =================================================================================================================== 00:15:54.143 Total : 3161.24 197.58 0.00 0.00 446585.44 7030.23 766413.73 00:15:54.712 00:15:54.712 real 0m8.273s 00:15:54.712 user 0m14.545s 00:15:54.712 sys 0m0.765s 00:15:54.712 12:37:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.712 12:37:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.712 ************************************ 00:15:54.712 END TEST bdev_verify_big_io 00:15:54.712 ************************************ 00:15:54.712 12:37:03 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:54.712 12:37:03 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:54.712 12:37:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:54.712 12:37:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.712 ************************************ 00:15:54.712 START TEST bdev_write_zeroes 00:15:54.712 ************************************ 00:15:54.712 12:37:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:54.712 [2024-05-15 12:37:03.686762] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:54.712 [2024-05-15 12:37:03.686921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70500 ] 00:15:54.971 [2024-05-15 12:37:03.850588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.229 [2024-05-15 12:37:04.091791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.797 Running I/O for 1 seconds... 00:15:56.800 00:15:56.800 Latency(us) 00:15:56.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.800 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:56.800 nvme0n1 : 1.02 12228.90 47.77 0.00 0.00 10454.19 7030.23 22520.55 00:15:56.800 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:56.800 nvme1n1 : 1.01 12325.64 48.15 0.00 0.00 10362.92 6791.91 21686.46 00:15:56.800 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:56.800 nvme1n2 : 1.02 12246.01 47.84 0.00 0.00 10422.73 6791.91 20733.21 00:15:56.800 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:56.800 nvme1n3 : 1.03 12225.27 47.75 0.00 0.00 10430.95 6762.12 19899.11 00:15:56.800 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:56.800 nvme2n1 : 1.02 16559.80 64.69 0.00 0.00 7690.95 3232.12 14894.55 00:15:56.800 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:56.800 nvme3n1 : 1.03 12204.16 47.67 0.00 0.00 10377.27 6732.33 22401.40 00:15:56.800 =================================================================================================================== 00:15:56.800 Total : 77789.78 303.87 0.00 0.00 9830.80 3232.12 22520.55 00:15:57.734 00:15:57.734 real 0m3.130s 00:15:57.734 user 0m2.333s 00:15:57.734 sys 0m0.613s 00:15:57.734 12:37:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.734 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:15:57.734 ************************************ 00:15:57.734 END TEST bdev_write_zeroes 00:15:57.734 ************************************ 00:15:57.992 12:37:06 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:57.992 12:37:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:57.992 12:37:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.992 12:37:06 -- common/autotest_common.sh@10 
-- # set +x 00:15:57.992 ************************************ 00:15:57.992 START TEST bdev_json_nonenclosed 00:15:57.992 ************************************ 00:15:57.992 12:37:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:57.992 [2024-05-15 12:37:06.881993] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:57.992 [2024-05-15 12:37:06.882198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70558 ] 00:15:58.249 [2024-05-15 12:37:07.057215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.515 [2024-05-15 12:37:07.312949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.515 [2024-05-15 12:37:07.313226] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:58.515 [2024-05-15 12:37:07.313256] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:58.777 00:15:58.777 real 0m0.926s 00:15:58.777 user 0m0.671s 00:15:58.777 sys 0m0.148s 00:15:58.777 12:37:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.777 12:37:07 -- common/autotest_common.sh@10 -- # set +x 00:15:58.777 ************************************ 00:15:58.777 END TEST bdev_json_nonenclosed 00:15:58.777 ************************************ 00:15:58.777 12:37:07 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.777 12:37:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:58.777 12:37:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.777 12:37:07 -- common/autotest_common.sh@10 -- # set +x 00:15:58.777 ************************************ 00:15:58.777 START TEST bdev_json_nonarray 00:15:58.777 ************************************ 00:15:58.777 12:37:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:59.035 [2024-05-15 12:37:07.840970] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:59.035 [2024-05-15 12:37:07.841127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:15:59.035 [2024-05-15 12:37:08.002879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.303 [2024-05-15 12:37:08.240895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.303 [2024-05-15 12:37:08.241148] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
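
The bdev_json_nonenclosed test above and the bdev_json_nonarray test running here feed bdevperf deliberately malformed configs: nonenclosed.json lacks the outer braces, nonarray.json makes "subsystems" something other than an array, and each must fail with exactly the json_config.c error shown. For contrast, a minimal well-formed config has this shape (the malloc bdev entry is illustrative; any bdev-layer method works):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 }
            }
          ]
        }
      ]
    }
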
00:15:59.303 [2024-05-15 12:37:08.241178] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:59.869 00:15:59.869 real 0m0.878s 00:15:59.869 user 0m0.631s 00:15:59.869 sys 0m0.141s 00:15:59.869 12:37:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.869 12:37:08 -- common/autotest_common.sh@10 -- # set +x 00:15:59.870 ************************************ 00:15:59.870 END TEST bdev_json_nonarray 00:15:59.870 ************************************ 00:15:59.870 12:37:08 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:15:59.870 12:37:08 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:15:59.870 12:37:08 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:15:59.870 12:37:08 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:15:59.870 12:37:08 -- bdev/blockdev.sh@809 -- # cleanup 00:15:59.870 12:37:08 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:59.870 12:37:08 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:59.870 12:37:08 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:15:59.870 12:37:08 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:15:59.870 12:37:08 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:15:59.870 12:37:08 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:15:59.870 12:37:08 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:00.805 lsblk: /dev/nvme0c0n1: not a block device 00:16:00.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:03.336 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.336 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.336 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.336 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.336 00:16:03.336 real 1m4.645s 00:16:03.336 user 1m44.306s 00:16:03.336 sys 0m30.463s 00:16:03.336 12:37:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.337 12:37:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.337 ************************************ 00:16:03.337 END TEST blockdev_xnvme 00:16:03.337 ************************************ 00:16:03.337 12:37:11 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:03.337 12:37:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:03.337 12:37:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.337 12:37:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.337 ************************************ 00:16:03.337 START TEST ublk 00:16:03.337 ************************************ 00:16:03.337 12:37:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:03.337 * Looking for test storage... 
00:16:03.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:03.337 12:37:12 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:03.337 12:37:12 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:03.337 12:37:12 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:03.337 12:37:12 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:03.337 12:37:12 -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:03.337 12:37:12 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:03.337 12:37:12 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:03.337 12:37:12 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:03.337 12:37:12 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:03.337 12:37:12 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:03.337 12:37:12 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:03.337 12:37:12 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:03.337 12:37:12 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:03.337 12:37:12 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:03.337 12:37:12 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:03.337 12:37:12 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:03.337 12:37:12 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:03.337 12:37:12 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:03.337 12:37:12 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:03.337 12:37:12 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:03.337 12:37:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:03.337 12:37:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.337 12:37:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.337 ************************************ 00:16:03.337 START TEST test_save_ublk_config 00:16:03.337 ************************************ 00:16:03.337 12:37:12 -- common/autotest_common.sh@1104 -- # test_save_config 00:16:03.337 12:37:12 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:03.337 12:37:12 -- ublk/ublk.sh@103 -- # tgtpid=70905 00:16:03.337 12:37:12 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:03.337 12:37:12 -- ublk/ublk.sh@106 -- # waitforlisten 70905 00:16:03.337 12:37:12 -- common/autotest_common.sh@819 -- # '[' -z 70905 ']' 00:16:03.337 12:37:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.337 12:37:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.337 12:37:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.337 12:37:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.337 12:37:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.337 12:37:12 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:03.337 [2024-05-15 12:37:12.190825] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
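
test_save_ublk_config loads ublk_drv, starts a fresh spdk_tgt with ublk debug logging, creates the ublk target and one ublk disk backed by malloc0, then calls save_config; the resulting JSON (dumped below) records each subsystem's current state so a later target can be started from it. The two ublk-specific steps, written as raw JSON-RPC requests with method and params taken verbatim from that saved config (the jsonrpc/id envelope is the standard one rpc.py adds):

    { "jsonrpc": "2.0", "id": 1, "method": "ublk_create_target",
      "params": { "cpumask": "1" } }
    { "jsonrpc": "2.0", "id": 2, "method": "ublk_start_disk",
      "params": { "bdev_name": "malloc0", "ublk_id": 0,
                  "num_queues": 1, "queue_depth": 128 } }

On success the kernel exposes the disk as /dev/ublkb0, which is the blkpath the test assigns above.
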
00:16:03.337 [2024-05-15 12:37:12.190995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:16:03.595 [2024-05-15 12:37:12.366222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.853 [2024-05-15 12:37:12.615691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.853 [2024-05-15 12:37:12.615936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.230 12:37:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:05.230 12:37:13 -- common/autotest_common.sh@852 -- # return 0 00:16:05.230 12:37:13 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:05.230 12:37:13 -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:05.230 12:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.230 12:37:13 -- common/autotest_common.sh@10 -- # set +x 00:16:05.230 [2024-05-15 12:37:13.891710] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:05.230 malloc0 00:16:05.230 [2024-05-15 12:37:13.975974] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:05.230 [2024-05-15 12:37:13.976100] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:05.230 [2024-05-15 12:37:13.976119] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:05.230 [2024-05-15 12:37:13.976151] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:05.230 [2024-05-15 12:37:13.983714] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:05.230 [2024-05-15 12:37:13.983746] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:05.230 [2024-05-15 12:37:13.990527] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:05.230 [2024-05-15 12:37:13.990660] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:05.230 [2024-05-15 12:37:14.007539] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:05.230 0 00:16:05.230 12:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.230 12:37:14 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:05.230 12:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.230 12:37:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 12:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.489 12:37:14 -- ublk/ublk.sh@115 -- # config='{ 00:16:05.489 "subsystems": [ 00:16:05.489 { 00:16:05.489 "subsystem": "iobuf", 00:16:05.489 "config": [ 00:16:05.489 { 00:16:05.489 "method": "iobuf_set_options", 00:16:05.489 "params": { 00:16:05.489 "small_pool_count": 8192, 00:16:05.489 "large_pool_count": 1024, 00:16:05.489 "small_bufsize": 8192, 00:16:05.489 "large_bufsize": 135168 00:16:05.489 } 00:16:05.489 } 00:16:05.489 ] 00:16:05.489 }, 00:16:05.489 { 00:16:05.489 "subsystem": "sock", 00:16:05.489 "config": [ 00:16:05.489 { 00:16:05.489 "method": "sock_impl_set_options", 00:16:05.489 "params": { 00:16:05.489 "impl_name": "posix", 00:16:05.489 "recv_buf_size": 2097152, 00:16:05.489 "send_buf_size": 2097152, 00:16:05.489 "enable_recv_pipe": true, 00:16:05.489 "enable_quickack": false, 00:16:05.489 "enable_placement_id": 0, 00:16:05.489 
"enable_zerocopy_send_server": true, 00:16:05.489 "enable_zerocopy_send_client": false, 00:16:05.489 "zerocopy_threshold": 0, 00:16:05.489 "tls_version": 0, 00:16:05.489 "enable_ktls": false 00:16:05.489 } 00:16:05.489 }, 00:16:05.489 { 00:16:05.489 "method": "sock_impl_set_options", 00:16:05.489 "params": { 00:16:05.489 "impl_name": "ssl", 00:16:05.489 "recv_buf_size": 4096, 00:16:05.489 "send_buf_size": 4096, 00:16:05.489 "enable_recv_pipe": true, 00:16:05.489 "enable_quickack": false, 00:16:05.489 "enable_placement_id": 0, 00:16:05.489 "enable_zerocopy_send_server": true, 00:16:05.489 "enable_zerocopy_send_client": false, 00:16:05.489 "zerocopy_threshold": 0, 00:16:05.489 "tls_version": 0, 00:16:05.489 "enable_ktls": false 00:16:05.489 } 00:16:05.489 } 00:16:05.489 ] 00:16:05.489 }, 00:16:05.489 { 00:16:05.489 "subsystem": "vmd", 00:16:05.489 "config": [] 00:16:05.489 }, 00:16:05.489 { 00:16:05.489 "subsystem": "accel", 00:16:05.489 "config": [ 00:16:05.489 { 00:16:05.490 "method": "accel_set_options", 00:16:05.490 "params": { 00:16:05.490 "small_cache_size": 128, 00:16:05.490 "large_cache_size": 16, 00:16:05.490 "task_count": 2048, 00:16:05.490 "sequence_count": 2048, 00:16:05.490 "buf_count": 2048 00:16:05.490 } 00:16:05.490 } 00:16:05.490 ] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "bdev", 00:16:05.490 "config": [ 00:16:05.490 { 00:16:05.490 "method": "bdev_set_options", 00:16:05.490 "params": { 00:16:05.490 "bdev_io_pool_size": 65535, 00:16:05.490 "bdev_io_cache_size": 256, 00:16:05.490 "bdev_auto_examine": true, 00:16:05.490 "iobuf_small_cache_size": 128, 00:16:05.490 "iobuf_large_cache_size": 16 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "bdev_raid_set_options", 00:16:05.490 "params": { 00:16:05.490 "process_window_size_kb": 1024 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "bdev_iscsi_set_options", 00:16:05.490 "params": { 00:16:05.490 "timeout_sec": 30 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "bdev_nvme_set_options", 00:16:05.490 "params": { 00:16:05.490 "action_on_timeout": "none", 00:16:05.490 "timeout_us": 0, 00:16:05.490 "timeout_admin_us": 0, 00:16:05.490 "keep_alive_timeout_ms": 10000, 00:16:05.490 "transport_retry_count": 4, 00:16:05.490 "arbitration_burst": 0, 00:16:05.490 "low_priority_weight": 0, 00:16:05.490 "medium_priority_weight": 0, 00:16:05.490 "high_priority_weight": 0, 00:16:05.490 "nvme_adminq_poll_period_us": 10000, 00:16:05.490 "nvme_ioq_poll_period_us": 0, 00:16:05.490 "io_queue_requests": 0, 00:16:05.490 "delay_cmd_submit": true, 00:16:05.490 "bdev_retry_count": 3, 00:16:05.490 "transport_ack_timeout": 0, 00:16:05.490 "ctrlr_loss_timeout_sec": 0, 00:16:05.490 "reconnect_delay_sec": 0, 00:16:05.490 "fast_io_fail_timeout_sec": 0, 00:16:05.490 "generate_uuids": false, 00:16:05.490 "transport_tos": 0, 00:16:05.490 "io_path_stat": false, 00:16:05.490 "allow_accel_sequence": false 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "bdev_nvme_set_hotplug", 00:16:05.490 "params": { 00:16:05.490 "period_us": 100000, 00:16:05.490 "enable": false 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "bdev_malloc_create", 00:16:05.490 "params": { 00:16:05.490 "name": "malloc0", 00:16:05.490 "num_blocks": 8192, 00:16:05.490 "block_size": 4096, 00:16:05.490 "physical_block_size": 4096, 00:16:05.490 "uuid": "6604a98b-b51d-4a22-9c31-0b9a5c19988e", 00:16:05.490 "optimal_io_boundary": 0 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 
"method": "bdev_wait_for_examine" 00:16:05.490 } 00:16:05.490 ] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "scsi", 00:16:05.490 "config": null 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "scheduler", 00:16:05.490 "config": [ 00:16:05.490 { 00:16:05.490 "method": "framework_set_scheduler", 00:16:05.490 "params": { 00:16:05.490 "name": "static" 00:16:05.490 } 00:16:05.490 } 00:16:05.490 ] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "vhost_scsi", 00:16:05.490 "config": [] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "vhost_blk", 00:16:05.490 "config": [] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "ublk", 00:16:05.490 "config": [ 00:16:05.490 { 00:16:05.490 "method": "ublk_create_target", 00:16:05.490 "params": { 00:16:05.490 "cpumask": "1" 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "ublk_start_disk", 00:16:05.490 "params": { 00:16:05.490 "bdev_name": "malloc0", 00:16:05.490 "ublk_id": 0, 00:16:05.490 "num_queues": 1, 00:16:05.490 "queue_depth": 128 00:16:05.490 } 00:16:05.490 } 00:16:05.490 ] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "nbd", 00:16:05.490 "config": [] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "nvmf", 00:16:05.490 "config": [ 00:16:05.490 { 00:16:05.490 "method": "nvmf_set_config", 00:16:05.490 "params": { 00:16:05.490 "discovery_filter": "match_any", 00:16:05.490 "admin_cmd_passthru": { 00:16:05.490 "identify_ctrlr": false 00:16:05.490 } 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "nvmf_set_max_subsystems", 00:16:05.490 "params": { 00:16:05.490 "max_subsystems": 1024 00:16:05.490 } 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "method": "nvmf_set_crdt", 00:16:05.490 "params": { 00:16:05.490 "crdt1": 0, 00:16:05.490 "crdt2": 0, 00:16:05.490 "crdt3": 0 00:16:05.490 } 00:16:05.490 } 00:16:05.490 ] 00:16:05.490 }, 00:16:05.490 { 00:16:05.490 "subsystem": "iscsi", 00:16:05.490 "config": [ 00:16:05.490 { 00:16:05.490 "method": "iscsi_set_options", 00:16:05.490 "params": { 00:16:05.490 "node_base": "iqn.2016-06.io.spdk", 00:16:05.490 "max_sessions": 128, 00:16:05.490 "max_connections_per_session": 2, 00:16:05.490 "max_queue_depth": 64, 00:16:05.490 "default_time2wait": 2, 00:16:05.490 "default_time2retain": 20, 00:16:05.490 "first_burst_length": 8192, 00:16:05.490 "immediate_data": true, 00:16:05.490 "allow_duplicated_isid": false, 00:16:05.490 "error_recovery_level": 0, 00:16:05.490 "nop_timeout": 60, 00:16:05.490 "nop_in_interval": 30, 00:16:05.490 "disable_chap": false, 00:16:05.490 "require_chap": false, 00:16:05.490 "mutual_chap": false, 00:16:05.490 "chap_group": 0, 00:16:05.490 "max_large_datain_per_connection": 64, 00:16:05.490 "max_r2t_per_connection": 4, 00:16:05.490 "pdu_pool_size": 36864, 00:16:05.490 "immediate_data_pool_size": 16384, 00:16:05.490 "data_out_pool_size": 2048 00:16:05.490 } 00:16:05.490 } 00:16:05.490 ] 00:16:05.490 } 00:16:05.490 ] 00:16:05.490 }' 00:16:05.490 12:37:14 -- ublk/ublk.sh@116 -- # killprocess 70905 00:16:05.490 12:37:14 -- common/autotest_common.sh@926 -- # '[' -z 70905 ']' 00:16:05.490 12:37:14 -- common/autotest_common.sh@930 -- # kill -0 70905 00:16:05.490 12:37:14 -- common/autotest_common.sh@931 -- # uname 00:16:05.490 12:37:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.490 12:37:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70905 00:16:05.490 12:37:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.490 killing process with pid 
70905 00:16:05.490 12:37:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.490 12:37:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70905' 00:16:05.490 12:37:14 -- common/autotest_common.sh@945 -- # kill 70905 00:16:05.490 12:37:14 -- common/autotest_common.sh@950 -- # wait 70905 00:16:06.865 [2024-05-15 12:37:15.683191] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.865 [2024-05-15 12:37:15.708682] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.865 [2024-05-15 12:37:15.708917] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.865 [2024-05-15 12:37:15.716551] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.865 [2024-05-15 12:37:15.716625] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:06.865 [2024-05-15 12:37:15.716639] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:06.865 [2024-05-15 12:37:15.716697] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:06.865 [2024-05-15 12:37:15.716900] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:08.241 12:37:16 -- ublk/ublk.sh@119 -- # tgtpid=70973 00:16:08.241 12:37:16 -- ublk/ublk.sh@121 -- # waitforlisten 70973 00:16:08.241 12:37:16 -- common/autotest_common.sh@819 -- # '[' -z 70973 ']' 00:16:08.241 12:37:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.241 12:37:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:08.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.241 12:37:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
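[editor's note] The second target below is started with -c /dev/fd/63: bash process substitution turns the echo'd JSON (the config captured earlier by the rpc_cmd save_config call at ublk.sh@115) into a file descriptor that the new spdk_tgt reads as its startup config. A minimal sketch of that round trip, assuming the workspace layout this job uses (scripts/rpc.py and build/bin/spdk_tgt relative to the spdk checkout):

    # Sketch only: capture the running target's live JSON-RPC state, then
    # replay it into a fresh target. <(...) is bash process substitution;
    # the child process sees it as /dev/fd/63, matching the -c argument
    # in the trace below.
    config=$(scripts/rpc.py save_config)
    build/bin/spdk_tgt -L ublk -c <(echo "$config")

The test then verifies the replayed target recreated /dev/ublkb0, proving the saved config fully describes the ublk target and its disks.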
00:16:08.241 12:37:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:08.241 12:37:16 -- common/autotest_common.sh@10 -- # set +x 00:16:08.241 12:37:16 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:08.241 12:37:16 -- ublk/ublk.sh@118 -- # echo '{ 00:16:08.241 "subsystems": [ 00:16:08.241 { 00:16:08.241 "subsystem": "iobuf", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "iobuf_set_options", 00:16:08.241 "params": { 00:16:08.241 "small_pool_count": 8192, 00:16:08.241 "large_pool_count": 1024, 00:16:08.241 "small_bufsize": 8192, 00:16:08.241 "large_bufsize": 135168 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "sock", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "sock_impl_set_options", 00:16:08.241 "params": { 00:16:08.241 "impl_name": "posix", 00:16:08.241 "recv_buf_size": 2097152, 00:16:08.241 "send_buf_size": 2097152, 00:16:08.241 "enable_recv_pipe": true, 00:16:08.241 "enable_quickack": false, 00:16:08.241 "enable_placement_id": 0, 00:16:08.241 "enable_zerocopy_send_server": true, 00:16:08.241 "enable_zerocopy_send_client": false, 00:16:08.241 "zerocopy_threshold": 0, 00:16:08.241 "tls_version": 0, 00:16:08.241 "enable_ktls": false 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "sock_impl_set_options", 00:16:08.241 "params": { 00:16:08.241 "impl_name": "ssl", 00:16:08.241 "recv_buf_size": 4096, 00:16:08.241 "send_buf_size": 4096, 00:16:08.241 "enable_recv_pipe": true, 00:16:08.241 "enable_quickack": false, 00:16:08.241 "enable_placement_id": 0, 00:16:08.241 "enable_zerocopy_send_server": true, 00:16:08.241 "enable_zerocopy_send_client": false, 00:16:08.241 "zerocopy_threshold": 0, 00:16:08.241 "tls_version": 0, 00:16:08.241 "enable_ktls": false 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "vmd", 00:16:08.241 "config": [] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "accel", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "accel_set_options", 00:16:08.241 "params": { 00:16:08.241 "small_cache_size": 128, 00:16:08.241 "large_cache_size": 16, 00:16:08.241 "task_count": 2048, 00:16:08.241 "sequence_count": 2048, 00:16:08.241 "buf_count": 2048 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "bdev", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "bdev_set_options", 00:16:08.241 "params": { 00:16:08.241 "bdev_io_pool_size": 65535, 00:16:08.241 "bdev_io_cache_size": 256, 00:16:08.241 "bdev_auto_examine": true, 00:16:08.241 "iobuf_small_cache_size": 128, 00:16:08.241 "iobuf_large_cache_size": 16 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "bdev_raid_set_options", 00:16:08.241 "params": { 00:16:08.241 "process_window_size_kb": 1024 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "bdev_iscsi_set_options", 00:16:08.241 "params": { 00:16:08.241 "timeout_sec": 30 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "bdev_nvme_set_options", 00:16:08.241 "params": { 00:16:08.241 "action_on_timeout": "none", 00:16:08.241 "timeout_us": 0, 00:16:08.241 "timeout_admin_us": 0, 00:16:08.241 "keep_alive_timeout_ms": 10000, 00:16:08.241 "transport_retry_count": 4, 00:16:08.241 "arbitration_burst": 0, 00:16:08.241 "low_priority_weight": 0, 00:16:08.241 "medium_priority_weight": 0, 00:16:08.241 "high_priority_weight": 0, 
00:16:08.241 "nvme_adminq_poll_period_us": 10000, 00:16:08.241 "nvme_ioq_poll_period_us": 0, 00:16:08.241 "io_queue_requests": 0, 00:16:08.241 "delay_cmd_submit": true, 00:16:08.241 "bdev_retry_count": 3, 00:16:08.241 "transport_ack_timeout": 0, 00:16:08.241 "ctrlr_loss_timeout_sec": 0, 00:16:08.241 "reconnect_delay_sec": 0, 00:16:08.241 "fast_io_fail_timeout_sec": 0, 00:16:08.241 "generate_uuids": false, 00:16:08.241 "transport_tos": 0, 00:16:08.241 "io_path_stat": false, 00:16:08.241 "allow_accel_sequence": false 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "bdev_nvme_set_hotplug", 00:16:08.241 "params": { 00:16:08.241 "period_us": 100000, 00:16:08.241 "enable": false 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "bdev_malloc_create", 00:16:08.241 "params": { 00:16:08.241 "name": "malloc0", 00:16:08.241 "num_blocks": 8192, 00:16:08.241 "block_size": 4096, 00:16:08.241 "physical_block_size": 4096, 00:16:08.241 "uuid": "6604a98b-b51d-4a22-9c31-0b9a5c19988e", 00:16:08.241 "optimal_io_boundary": 0 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "bdev_wait_for_examine" 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "scsi", 00:16:08.241 "config": null 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "scheduler", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "framework_set_scheduler", 00:16:08.241 "params": { 00:16:08.241 "name": "static" 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "vhost_scsi", 00:16:08.241 "config": [] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "vhost_blk", 00:16:08.241 "config": [] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "ublk", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "ublk_create_target", 00:16:08.241 "params": { 00:16:08.241 "cpumask": "1" 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "ublk_start_disk", 00:16:08.241 "params": { 00:16:08.241 "bdev_name": "malloc0", 00:16:08.241 "ublk_id": 0, 00:16:08.241 "num_queues": 1, 00:16:08.241 "queue_depth": 128 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "nbd", 00:16:08.241 "config": [] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "nvmf", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "nvmf_set_config", 00:16:08.241 "params": { 00:16:08.241 "discovery_filter": "match_any", 00:16:08.241 "admin_cmd_passthru": { 00:16:08.241 "identify_ctrlr": false 00:16:08.241 } 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "nvmf_set_max_subsystems", 00:16:08.241 "params": { 00:16:08.241 "max_subsystems": 1024 00:16:08.241 } 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "method": "nvmf_set_crdt", 00:16:08.241 "params": { 00:16:08.241 "crdt1": 0, 00:16:08.241 "crdt2": 0, 00:16:08.241 "crdt3": 0 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }, 00:16:08.241 { 00:16:08.241 "subsystem": "iscsi", 00:16:08.241 "config": [ 00:16:08.241 { 00:16:08.241 "method": "iscsi_set_options", 00:16:08.241 "params": { 00:16:08.241 "node_base": "iqn.2016-06.io.spdk", 00:16:08.241 "max_sessions": 128, 00:16:08.241 "max_connections_per_session": 2, 00:16:08.241 "max_queue_depth": 64, 00:16:08.241 "default_time2wait": 2, 00:16:08.241 "default_time2retain": 20, 00:16:08.241 "first_burst_length": 8192, 00:16:08.241 "immediate_data": true, 00:16:08.241 "allow_duplicated_isid": false, 00:16:08.241 
"error_recovery_level": 0, 00:16:08.241 "nop_timeout": 60, 00:16:08.241 "nop_in_interval": 30, 00:16:08.241 "disable_chap": false, 00:16:08.241 "require_chap": false, 00:16:08.241 "mutual_chap": false, 00:16:08.241 "chap_group": 0, 00:16:08.241 "max_large_datain_per_connection": 64, 00:16:08.241 "max_r2t_per_connection": 4, 00:16:08.241 "pdu_pool_size": 36864, 00:16:08.241 "immediate_data_pool_size": 16384, 00:16:08.241 "data_out_pool_size": 2048 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 } 00:16:08.241 ] 00:16:08.241 }' 00:16:08.241 [2024-05-15 12:37:17.139439] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:08.241 [2024-05-15 12:37:17.139634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70973 ] 00:16:08.499 [2024-05-15 12:37:17.308392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.758 [2024-05-15 12:37:17.561093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:08.758 [2024-05-15 12:37:17.561327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.691 [2024-05-15 12:37:18.514655] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:09.691 [2024-05-15 12:37:18.521676] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:09.691 [2024-05-15 12:37:18.521797] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:09.691 [2024-05-15 12:37:18.521814] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:09.691 [2024-05-15 12:37:18.521825] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:09.691 [2024-05-15 12:37:18.530615] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:09.691 [2024-05-15 12:37:18.530642] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:09.691 [2024-05-15 12:37:18.537537] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:09.691 [2024-05-15 12:37:18.537679] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:09.691 [2024-05-15 12:37:18.554539] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:09.947 12:37:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:09.947 12:37:18 -- common/autotest_common.sh@852 -- # return 0 00:16:09.947 12:37:18 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:09.947 12:37:18 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:09.947 12:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.947 12:37:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.947 12:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.947 12:37:18 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:09.947 12:37:18 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:09.947 12:37:18 -- ublk/ublk.sh@125 -- # killprocess 70973 00:16:09.947 12:37:18 -- common/autotest_common.sh@926 -- # '[' -z 70973 ']' 00:16:09.947 12:37:18 -- common/autotest_common.sh@930 -- # kill -0 70973 00:16:09.947 12:37:18 -- common/autotest_common.sh@931 -- # uname 00:16:09.947 12:37:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:16:09.947 12:37:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70973 00:16:09.947 12:37:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.947 12:37:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.947 killing process with pid 70973 00:16:09.947 12:37:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70973' 00:16:09.947 12:37:18 -- common/autotest_common.sh@945 -- # kill 70973 00:16:09.947 12:37:18 -- common/autotest_common.sh@950 -- # wait 70973 00:16:11.317 [2024-05-15 12:37:20.176870] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.317 [2024-05-15 12:37:20.208668] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.317 [2024-05-15 12:37:20.208856] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.317 [2024-05-15 12:37:20.216613] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.317 [2024-05-15 12:37:20.216674] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:11.317 [2024-05-15 12:37:20.216686] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:11.317 [2024-05-15 12:37:20.216741] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:11.317 [2024-05-15 12:37:20.216948] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:12.691 12:37:21 -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:12.691 00:16:12.691 real 0m9.442s 00:16:12.691 user 0m8.493s 00:16:12.691 sys 0m2.266s 00:16:12.691 12:37:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.691 12:37:21 -- common/autotest_common.sh@10 -- # set +x 00:16:12.691 ************************************ 00:16:12.691 END TEST test_save_ublk_config 00:16:12.691 ************************************ 00:16:12.691 12:37:21 -- ublk/ublk.sh@139 -- # spdk_pid=71053 00:16:12.691 12:37:21 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:12.691 12:37:21 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:12.691 12:37:21 -- ublk/ublk.sh@141 -- # waitforlisten 71053 00:16:12.691 12:37:21 -- common/autotest_common.sh@819 -- # '[' -z 71053 ']' 00:16:12.691 12:37:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.691 12:37:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:12.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.691 12:37:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.691 12:37:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:12.691 12:37:21 -- common/autotest_common.sh@10 -- # set +x 00:16:12.691 [2024-05-15 12:37:21.648454] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:12.691 [2024-05-15 12:37:21.648643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71053 ] 00:16:12.949 [2024-05-15 12:37:21.824011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.208 [2024-05-15 12:37:22.066563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.208 [2024-05-15 12:37:22.067027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.208 [2024-05-15 12:37:22.067044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.583 12:37:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:14.583 12:37:23 -- common/autotest_common.sh@852 -- # return 0 00:16:14.583 12:37:23 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:14.583 12:37:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:14.583 12:37:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:14.583 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.583 ************************************ 00:16:14.583 START TEST test_create_ublk 00:16:14.583 ************************************ 00:16:14.583 12:37:23 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:16:14.583 12:37:23 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:14.583 12:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.583 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.583 [2024-05-15 12:37:23.328271] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:14.583 12:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.583 12:37:23 -- ublk/ublk.sh@33 -- # ublk_target= 00:16:14.583 12:37:23 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:14.583 12:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.583 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.841 12:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.841 12:37:23 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:14.841 12:37:23 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:14.841 12:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.841 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.841 [2024-05-15 12:37:23.610691] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:14.841 [2024-05-15 12:37:23.611228] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:14.841 [2024-05-15 12:37:23.611257] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:14.841 [2024-05-15 12:37:23.611280] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:14.841 [2024-05-15 12:37:23.619889] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:14.841 [2024-05-15 12:37:23.619923] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:14.841 [2024-05-15 12:37:23.626537] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:14.841 [2024-05-15 12:37:23.646817] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:14.841 [2024-05-15 12:37:23.658550] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:16:14.841 12:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.841 12:37:23 -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:14.841 12:37:23 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:14.841 12:37:23 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:14.841 12:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.841 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.841 12:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.841 12:37:23 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:14.841 { 00:16:14.841 "ublk_device": "/dev/ublkb0", 00:16:14.841 "id": 0, 00:16:14.841 "queue_depth": 512, 00:16:14.841 "num_queues": 4, 00:16:14.841 "bdev_name": "Malloc0" 00:16:14.841 } 00:16:14.841 ]' 00:16:14.841 12:37:23 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:14.841 12:37:23 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:14.841 12:37:23 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:14.841 12:37:23 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:14.841 12:37:23 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:15.099 12:37:23 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:15.099 12:37:23 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:15.099 12:37:23 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:15.099 12:37:23 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:15.099 12:37:23 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:15.099 12:37:23 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:15.099 12:37:23 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:15.099 12:37:23 -- lvol/common.sh@41 -- # local offset=0 00:16:15.099 12:37:23 -- lvol/common.sh@42 -- # local size=134217728 00:16:15.099 12:37:23 -- lvol/common.sh@43 -- # local rw=write 00:16:15.099 12:37:23 -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:15.099 12:37:23 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:15.099 12:37:23 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:15.099 12:37:23 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:15.099 12:37:23 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:15.099 12:37:23 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:15.099 12:37:23 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:15.099 fio: verification read phase will never start because write phase uses all of runtime 00:16:15.099 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:15.099 fio-3.35 00:16:15.099 Starting 1 process 00:16:27.302 00:16:27.302 fio_test: (groupid=0, jobs=1): err= 0: pid=71113: Wed May 15 12:37:34 2024 00:16:27.302 write: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(400MiB/10001msec); 0 zone resets 00:16:27.302 clat (usec): min=54, max=7312, avg=96.27, stdev=141.40 00:16:27.302 lat (usec): min=55, max=7338, avg=97.04, stdev=141.42 00:16:27.302 clat percentiles (usec): 00:16:27.302 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 73], 00:16:27.302 | 
30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 83], 00:16:27.302 | 70.00th=[ 90], 80.00th=[ 115], 90.00th=[ 126], 95.00th=[ 137], 00:16:27.302 | 99.00th=[ 163], 99.50th=[ 186], 99.90th=[ 2933], 99.95th=[ 3425], 00:16:27.302 | 99.99th=[ 4047] 00:16:27.303 bw ( KiB/s): min=29520, max=48456, per=99.93%, avg=40920.00, stdev=8465.88, samples=19 00:16:27.303 iops : min= 7380, max=12114, avg=10230.00, stdev=2116.47, samples=19 00:16:27.303 lat (usec) : 100=74.37%, 250=25.26%, 500=0.03%, 750=0.02%, 1000=0.03% 00:16:27.303 lat (msec) : 2=0.12%, 4=0.17%, 10=0.01% 00:16:27.303 cpu : usr=2.09%, sys=6.67%, ctx=102379, majf=0, minf=797 00:16:27.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.303 issued rwts: total=0,102379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.303 00:16:27.303 Run status group 0 (all jobs): 00:16:27.303 WRITE: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=400MiB (419MB), run=10001-10001msec 00:16:27.303 00:16:27.303 Disk stats (read/write): 00:16:27.303 ublkb0: ios=0/101301, merge=0/0, ticks=0/9064, in_queue=9064, util=99.11% 00:16:27.303 12:37:34 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 [2024-05-15 12:37:34.193406] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:27.303 [2024-05-15 12:37:34.235644] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:27.303 [2024-05-15 12:37:34.237261] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:27.303 [2024-05-15 12:37:34.240369] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:27.303 [2024-05-15 12:37:34.240755] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:27.303 [2024-05-15 12:37:34.240776] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:27.303 12:37:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:34 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:27.303 12:37:34 -- common/autotest_common.sh@640 -- # local es=0 00:16:27.303 12:37:34 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:27.303 12:37:34 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:27.303 12:37:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.303 12:37:34 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:27.303 12:37:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.303 12:37:34 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 [2024-05-15 12:37:34.257767] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:27.303 request: 00:16:27.303 { 00:16:27.303 "ublk_id": 0, 00:16:27.303 "method": "ublk_stop_disk", 00:16:27.303 "req_id": 1 00:16:27.303 } 00:16:27.303 Got JSON-RPC error response 00:16:27.303 response: 00:16:27.303 { 00:16:27.303 "code": -19, 00:16:27.303 "message": "No such device" 
00:16:27.303 } 00:16:27.303 12:37:34 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:27.303 12:37:34 -- common/autotest_common.sh@643 -- # es=1 00:16:27.303 12:37:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:27.303 12:37:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:27.303 12:37:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:27.303 12:37:34 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 [2024-05-15 12:37:34.273696] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:27.303 [2024-05-15 12:37:34.281529] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:27.303 [2024-05-15 12:37:34.281616] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:27.303 12:37:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:34 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 12:37:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:34 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:27.303 12:37:34 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 12:37:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:34 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:27.303 12:37:34 -- lvol/common.sh@26 -- # jq length 00:16:27.303 12:37:34 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:27.303 12:37:34 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 12:37:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:34 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:27.303 12:37:34 -- lvol/common.sh@28 -- # jq length 00:16:27.303 12:37:34 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:27.303 00:16:27.303 real 0m11.472s 00:16:27.303 user 0m0.670s 00:16:27.303 sys 0m0.759s 00:16:27.303 12:37:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 ************************************ 00:16:27.303 END TEST test_create_ublk 00:16:27.303 ************************************ 00:16:27.303 12:37:34 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:27.303 12:37:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:27.303 12:37:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 ************************************ 00:16:27.303 START TEST test_create_multi_ublk 00:16:27.303 ************************************ 00:16:27.303 12:37:34 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:16:27.303 12:37:34 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 [2024-05-15 12:37:34.852641] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:27.303 
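[editor's note] test_create_multi_ublk repeats the single-device flow four times; the loop bound comes from MAX_DEV_ID=3 set near the top of ublk.sh above. Condensed from the trace below into direct RPC calls (rpc_cmd is autotest_common.sh's RPC helper; this sketch invokes scripts/rpc.py directly, which accepts the same verbs and arguments):

    # Sketch of the create loop traced below: one 128 MiB malloc bdev with
    # 4096-byte blocks per device, exported as /dev/ublkb$i with 4 queues
    # at queue depth 512.
    for i in $(seq 0 3); do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done
    scripts/rpc.py ublk_get_disks   # returns the four-entry array checked below

Teardown mirrors this: ublk_stop_disk per device, then ublk_destroy_target (issued below with a 120 s RPC timeout via rpc.py -t 120, since it waits for all queues to drain).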
12:37:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:34 -- ublk/ublk.sh@62 -- # ublk_target= 00:16:27.303 12:37:34 -- ublk/ublk.sh@64 -- # seq 0 3 00:16:27.303 12:37:34 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.303 12:37:34 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:27.303 12:37:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 12:37:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:35 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:27.303 12:37:35 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:27.303 12:37:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 [2024-05-15 12:37:35.159799] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:27.303 [2024-05-15 12:37:35.160402] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:27.303 [2024-05-15 12:37:35.160424] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:27.303 [2024-05-15 12:37:35.160440] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.303 [2024-05-15 12:37:35.168996] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.303 [2024-05-15 12:37:35.169060] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.303 [2024-05-15 12:37:35.175550] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.303 [2024-05-15 12:37:35.176725] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:27.303 [2024-05-15 12:37:35.188094] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.303 12:37:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:35 -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:27.303 12:37:35 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.303 12:37:35 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:27.303 12:37:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 12:37:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:35 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:27.303 12:37:35 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:27.303 12:37:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 [2024-05-15 12:37:35.524751] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:27.303 [2024-05-15 12:37:35.525404] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:27.303 [2024-05-15 12:37:35.525442] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:27.303 [2024-05-15 12:37:35.525454] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.303 [2024-05-15 12:37:35.532607] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.303 [2024-05-15 12:37:35.532650] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.303 [2024-05-15 
12:37:35.540574] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.303 [2024-05-15 12:37:35.541734] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:27.303 [2024-05-15 12:37:35.549606] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.303 12:37:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:35 -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:27.303 12:37:35 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.303 12:37:35 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:27.303 12:37:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 12:37:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.303 12:37:35 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:27.303 12:37:35 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:27.303 12:37:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.303 12:37:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.303 [2024-05-15 12:37:35.871804] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:27.303 [2024-05-15 12:37:35.872422] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:27.303 [2024-05-15 12:37:35.872444] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:27.303 [2024-05-15 12:37:35.872463] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.303 [2024-05-15 12:37:35.879667] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.303 [2024-05-15 12:37:35.879717] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.304 [2024-05-15 12:37:35.887677] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.304 [2024-05-15 12:37:35.888886] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:27.304 [2024-05-15 12:37:35.893715] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.304 12:37:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.304 12:37:35 -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:27.304 12:37:35 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.304 12:37:35 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:27.304 12:37:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.304 12:37:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.304 12:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.304 12:37:36 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:27.304 12:37:36 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:27.304 12:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.304 12:37:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.304 [2024-05-15 12:37:36.219785] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:27.304 [2024-05-15 12:37:36.220413] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:27.304 [2024-05-15 12:37:36.220438] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:27.304 [2024-05-15 12:37:36.220450] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.304 [2024-05-15 12:37:36.228999] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.304 [2024-05-15 12:37:36.229053] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.304 [2024-05-15 12:37:36.235647] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.304 [2024-05-15 12:37:36.236879] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:27.304 [2024-05-15 12:37:36.245187] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.304 12:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.304 12:37:36 -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:27.304 12:37:36 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:27.304 12:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.304 12:37:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.304 12:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.304 12:37:36 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:27.304 { 00:16:27.304 "ublk_device": "/dev/ublkb0", 00:16:27.304 "id": 0, 00:16:27.304 "queue_depth": 512, 00:16:27.304 "num_queues": 4, 00:16:27.304 "bdev_name": "Malloc0" 00:16:27.304 }, 00:16:27.304 { 00:16:27.304 "ublk_device": "/dev/ublkb1", 00:16:27.304 "id": 1, 00:16:27.304 "queue_depth": 512, 00:16:27.304 "num_queues": 4, 00:16:27.304 "bdev_name": "Malloc1" 00:16:27.304 }, 00:16:27.304 { 00:16:27.304 "ublk_device": "/dev/ublkb2", 00:16:27.304 "id": 2, 00:16:27.304 "queue_depth": 512, 00:16:27.304 "num_queues": 4, 00:16:27.304 "bdev_name": "Malloc2" 00:16:27.304 }, 00:16:27.304 { 00:16:27.304 "ublk_device": "/dev/ublkb3", 00:16:27.304 "id": 3, 00:16:27.304 "queue_depth": 512, 00:16:27.304 "num_queues": 4, 00:16:27.304 "bdev_name": "Malloc3" 00:16:27.304 } 00:16:27.304 ]' 00:16:27.304 12:37:36 -- ublk/ublk.sh@72 -- # seq 0 3 00:16:27.304 12:37:36 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.304 12:37:36 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:27.563 12:37:36 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:27.563 12:37:36 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:27.563 12:37:36 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:27.563 12:37:36 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:27.563 12:37:36 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:27.563 12:37:36 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:27.563 12:37:36 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:27.563 12:37:36 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:27.563 12:37:36 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:27.563 12:37:36 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.563 12:37:36 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:27.821 12:37:36 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:27.821 12:37:36 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:27.821 12:37:36 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:27.821 12:37:36 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:27.821 12:37:36 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:27.821 12:37:36 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:27.821 12:37:36 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:27.821 12:37:36 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:27.821 12:37:36 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:27.821 12:37:36 -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.821 12:37:36 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:28.080 12:37:36 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:28.080 12:37:36 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:28.080 12:37:36 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:28.080 12:37:36 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:28.080 12:37:36 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:28.080 12:37:36 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:28.080 12:37:37 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:28.080 12:37:37 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:28.080 12:37:37 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:28.080 12:37:37 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.080 12:37:37 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:28.338 12:37:37 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:28.338 12:37:37 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:28.338 12:37:37 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:28.338 12:37:37 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:28.338 12:37:37 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:28.338 12:37:37 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:28.338 12:37:37 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:28.338 12:37:37 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:28.338 12:37:37 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:28.338 12:37:37 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:28.338 12:37:37 -- ublk/ublk.sh@85 -- # seq 0 3 00:16:28.338 12:37:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.338 12:37:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:28.338 12:37:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.338 12:37:37 -- common/autotest_common.sh@10 -- # set +x 00:16:28.338 [2024-05-15 12:37:37.337064] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.597 [2024-05-15 12:37:37.374652] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.597 [2024-05-15 12:37:37.380021] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.597 [2024-05-15 12:37:37.388590] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.597 [2024-05-15 12:37:37.389093] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:28.597 [2024-05-15 12:37:37.389121] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:28.597 12:37:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.597 12:37:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.597 12:37:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:28.597 12:37:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.597 12:37:37 -- common/autotest_common.sh@10 -- # set +x 00:16:28.597 [2024-05-15 12:37:37.396765] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.597 [2024-05-15 12:37:37.434692] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.597 [2024-05-15 12:37:37.440031] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.597 [2024-05-15 12:37:37.447604] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.597 [2024-05-15 12:37:37.448052] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: 
ublk1: remove from tailq 00:16:28.597 [2024-05-15 12:37:37.448097] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:28.597 12:37:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.597 12:37:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.597 12:37:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:28.597 12:37:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.597 12:37:37 -- common/autotest_common.sh@10 -- # set +x 00:16:28.597 [2024-05-15 12:37:37.455751] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.597 [2024-05-15 12:37:37.492649] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.597 [2024-05-15 12:37:37.494522] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.597 [2024-05-15 12:37:37.501578] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.597 [2024-05-15 12:37:37.502075] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:28.597 [2024-05-15 12:37:37.502107] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:28.597 12:37:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.597 12:37:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.597 12:37:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:28.597 12:37:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.597 12:37:37 -- common/autotest_common.sh@10 -- # set +x 00:16:28.597 [2024-05-15 12:37:37.509751] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.597 [2024-05-15 12:37:37.552640] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.597 [2024-05-15 12:37:37.554322] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.597 [2024-05-15 12:37:37.559595] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.597 [2024-05-15 12:37:37.560021] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:28.597 [2024-05-15 12:37:37.560050] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:28.597 12:37:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.597 12:37:37 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:28.855 [2024-05-15 12:37:37.824799] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:28.855 [2024-05-15 12:37:37.831826] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:28.855 [2024-05-15 12:37:37.831897] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:28.855 12:37:37 -- ublk/ublk.sh@93 -- # seq 0 3 00:16:28.855 12:37:37 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.855 12:37:37 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:28.855 12:37:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.855 12:37:37 -- common/autotest_common.sh@10 -- # set +x 00:16:29.421 12:37:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.421 12:37:38 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.421 12:37:38 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:29.421 12:37:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.421 12:37:38 -- common/autotest_common.sh@10 -- # set +x 00:16:29.696 12:37:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.696 
12:37:38 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.696 12:37:38 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:29.696 12:37:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.696 12:37:38 -- common/autotest_common.sh@10 -- # set +x 00:16:29.969 12:37:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.969 12:37:38 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.969 12:37:38 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:29.969 12:37:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.969 12:37:38 -- common/autotest_common.sh@10 -- # set +x 00:16:30.537 12:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.537 12:37:39 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:30.537 12:37:39 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:30.537 12:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.537 12:37:39 -- common/autotest_common.sh@10 -- # set +x 00:16:30.537 12:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.537 12:37:39 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:30.537 12:37:39 -- lvol/common.sh@26 -- # jq length 00:16:30.537 12:37:39 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:30.537 12:37:39 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:30.537 12:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.537 12:37:39 -- common/autotest_common.sh@10 -- # set +x 00:16:30.537 12:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.537 12:37:39 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:30.537 12:37:39 -- lvol/common.sh@28 -- # jq length 00:16:30.537 12:37:39 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:30.537 ************************************ 00:16:30.537 END TEST test_create_multi_ublk 00:16:30.537 ************************************ 00:16:30.537 00:16:30.537 real 0m4.653s 00:16:30.537 user 0m1.373s 00:16:30.537 sys 0m0.172s 00:16:30.537 12:37:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.537 12:37:39 -- common/autotest_common.sh@10 -- # set +x 00:16:30.537 12:37:39 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:30.537 12:37:39 -- ublk/ublk.sh@147 -- # cleanup 00:16:30.537 12:37:39 -- ublk/ublk.sh@130 -- # killprocess 71053 00:16:30.537 12:37:39 -- common/autotest_common.sh@926 -- # '[' -z 71053 ']' 00:16:30.537 12:37:39 -- common/autotest_common.sh@930 -- # kill -0 71053 00:16:30.537 12:37:39 -- common/autotest_common.sh@931 -- # uname 00:16:30.537 12:37:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.537 12:37:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71053 00:16:30.796 killing process with pid 71053 00:16:30.796 12:37:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:30.796 12:37:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:30.796 12:37:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71053' 00:16:30.796 12:37:39 -- common/autotest_common.sh@945 -- # kill 71053 00:16:30.796 12:37:39 -- common/autotest_common.sh@950 -- # wait 71053 00:16:31.731 [2024-05-15 12:37:40.719803] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:31.731 [2024-05-15 12:37:40.719866] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:33.107 ************************************ 00:16:33.107 END TEST ublk 00:16:33.107 ************************************ 00:16:33.107 00:16:33.107 real 0m29.935s 00:16:33.107 user 0m45.965s 00:16:33.107 sys 
0m8.179s 00:16:33.107 12:37:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.107 12:37:41 -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 12:37:41 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:33.107 12:37:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:33.107 12:37:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:33.107 12:37:41 -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 ************************************ 00:16:33.107 START TEST ublk_recovery 00:16:33.107 ************************************ 00:16:33.107 12:37:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:33.107 * Looking for test storage... 00:16:33.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:33.107 12:37:42 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:33.107 12:37:42 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:33.107 12:37:42 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:33.107 12:37:42 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:33.107 12:37:42 -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:33.107 12:37:42 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:33.107 12:37:42 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:33.107 12:37:42 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:33.107 12:37:42 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:33.107 12:37:42 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:33.107 12:37:42 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71456 00:16:33.107 12:37:42 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:33.107 12:37:42 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.107 12:37:42 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71456 00:16:33.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.107 12:37:42 -- common/autotest_common.sh@819 -- # '[' -z 71456 ']' 00:16:33.107 12:37:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.107 12:37:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:33.107 12:37:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.107 12:37:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:33.107 12:37:42 -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 [2024-05-15 12:37:42.111817] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
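The recovery scenario that the trace below walks through has a fixed shape: start a target, aim fio at the ublk block device, SIGKILL the target mid-I/O, start a fresh target, and re-adopt the kernel device with ublk_recover_disk. A condensed sketch of that flow, reconstructed from the xtrace that follows (rpc_cmd and waitforlisten are the harness's own helpers; control flow is compressed, arguments are as they appear in the trace):

    # Sketch of the ublk_recovery scenario traced below; not a
    # verbatim excerpt of ublk_recovery.sh.
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!
    waitforlisten "$spdk_pid"
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1
    fio --name=fio_test --filename=/dev/ublkb1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 & fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"                  # hard-kill the target mid-I/O
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!
    waitforlisten "$spdk_pid"
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_recover_disk malloc0 1  # re-adopt kernel ublk dev 1
    wait "$fio_proc"                     # fio rides out the stall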
00:16:33.107 [2024-05-15 12:37:42.111960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71456 ] 00:16:33.364 [2024-05-15 12:37:42.277611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:33.623 [2024-05-15 12:37:42.567320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:33.623 [2024-05-15 12:37:42.567728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.623 [2024-05-15 12:37:42.567751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.000 12:37:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.000 12:37:43 -- common/autotest_common.sh@852 -- # return 0 00:16:35.000 12:37:43 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:35.000 12:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.000 12:37:43 -- common/autotest_common.sh@10 -- # set +x 00:16:35.000 [2024-05-15 12:37:43.788590] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:35.000 12:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.000 12:37:43 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:35.000 12:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.000 12:37:43 -- common/autotest_common.sh@10 -- # set +x 00:16:35.000 malloc0 00:16:35.000 12:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.000 12:37:43 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:35.000 12:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.000 12:37:43 -- common/autotest_common.sh@10 -- # set +x 00:16:35.000 [2024-05-15 12:37:43.940758] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:35.000 [2024-05-15 12:37:43.940915] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:35.000 [2024-05-15 12:37:43.940932] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:35.000 [2024-05-15 12:37:43.940946] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:35.000 [2024-05-15 12:37:43.948514] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:35.000 [2024-05-15 12:37:43.948549] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:35.000 [2024-05-15 12:37:43.956543] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:35.000 [2024-05-15 12:37:43.956765] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:35.000 [2024-05-15 12:37:43.987555] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:35.000 1 00:16:35.000 12:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.000 12:37:43 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:36.374 12:37:44 -- ublk/ublk_recovery.sh@31 -- # fio_proc=71499 00:16:36.374 12:37:44 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:36.374 12:37:45 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:36.374 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:36.374 fio-3.35 00:16:36.374 Starting 1 process 00:16:41.653 12:37:50 -- ublk/ublk_recovery.sh@36 -- # kill -9 71456 00:16:41.653 12:37:50 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:46.976 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71456 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:46.976 12:37:55 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71610 00:16:46.976 12:37:55 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.976 12:37:55 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:46.976 12:37:55 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71610 00:16:46.976 12:37:55 -- common/autotest_common.sh@819 -- # '[' -z 71610 ']' 00:16:46.976 12:37:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.976 12:37:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:46.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.976 12:37:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.976 12:37:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:46.976 12:37:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.976 [2024-05-15 12:37:55.108065] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:46.976 [2024-05-15 12:37:55.108224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71610 ] 00:16:46.976 [2024-05-15 12:37:55.269785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.976 [2024-05-15 12:37:55.515672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:46.976 [2024-05-15 12:37:55.516314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.976 [2024-05-15 12:37:55.516346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.908 12:37:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:47.908 12:37:56 -- common/autotest_common.sh@852 -- # return 0 00:16:47.908 12:37:56 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:47.909 12:37:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.909 12:37:56 -- common/autotest_common.sh@10 -- # set +x 00:16:47.909 [2024-05-15 12:37:56.783355] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.909 12:37:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.909 12:37:56 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:47.909 12:37:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.909 12:37:56 -- common/autotest_common.sh@10 -- # set +x 00:16:48.166 malloc0 00:16:48.166 12:37:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.166 12:37:56 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:48.166 12:37:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.166 12:37:56 -- common/autotest_common.sh@10 -- # set +x 00:16:48.166 [2024-05-15 12:37:56.931763] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:48.166 [2024-05-15 12:37:56.931839] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:48.166 [2024-05-15 12:37:56.931856] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:48.166 [2024-05-15 12:37:56.939589] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:48.166 [2024-05-15 12:37:56.939618] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:48.166 [2024-05-15 12:37:56.939732] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:48.166 1 00:16:48.166 12:37:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.166 12:37:56 -- ublk/ublk_recovery.sh@52 -- # wait 71499 00:17:14.699 [2024-05-15 12:38:20.038550] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:14.699 [2024-05-15 12:38:20.044940] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:14.699 [2024-05-15 12:38:20.050561] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:14.699 [2024-05-15 12:38:20.050618] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:36.655 00:17:36.655 fio_test: (groupid=0, jobs=1): err= 0: pid=71502: Wed May 15 12:38:45 2024 00:17:36.655 read: IOPS=9773, BW=38.2MiB/s (40.0MB/s)(2291MiB/60003msec) 00:17:36.655 slat (nsec): min=1960, max=259272, avg=6697.95, stdev=2966.36 00:17:36.655 clat (usec): min=1248, max=30058k, avg=6692.69, stdev=323603.47 00:17:36.655 lat (usec): min=1265, max=30058k, avg=6699.39, stdev=323603.47 00:17:36.655 clat percentiles (msec): 00:17:36.655 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:36.655 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4], 00:17:36.655 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:36.655 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:17:36.655 | 99.99th=[17113] 00:17:36.655 bw ( KiB/s): min=16416, max=85344, per=100.00%, avg=76988.40, stdev=11942.68, samples=60 00:17:36.655 iops : min= 4104, max=21336, avg=19247.10, stdev=2985.67, samples=60 00:17:36.655 write: IOPS=9764, BW=38.1MiB/s (40.0MB/s)(2289MiB/60003msec); 0 zone resets 00:17:36.655 slat (nsec): min=1980, max=190477, avg=6708.92, stdev=2998.60 00:17:36.655 clat (usec): min=1186, max=30058k, avg=6394.56, stdev=304124.47 00:17:36.655 lat (usec): min=1195, max=30058k, avg=6401.27, stdev=304124.48 00:17:36.655 clat percentiles (msec): 00:17:36.655 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:36.655 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:17:36.655 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:36.655 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:17:36.655 | 99.99th=[17113] 00:17:36.655 bw ( KiB/s): min=16416, max=85152, per=100.00%, avg=76886.27, stdev=11997.87, samples=60 00:17:36.655 iops : min= 4104, max=21288, avg=19221.57, stdev=2999.47, samples=60 00:17:36.655 lat (msec) : 2=0.07%, 4=92.88%, 10=7.00%, 20=0.04%, >=2000=0.01% 00:17:36.655 cpu : usr=5.52%, sys=12.23%, ctx=37333, majf=0, minf=14 00:17:36.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:36.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.655 issued rwts: total=586448,585871,0,0 short=0,0,0,0 dropped=0,0,0,0 
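Two cross-checks on the job summary above. Throughput: 586448 read completions at 4 KiB over 60.003 s works out to 586448 * 4096 / 60.003 ≈ 40.0 MB/s (38.2 MiB/s) and 586448 / 60.003 ≈ 9773 IOPS, matching the reported read line exactly. Latency: the ~30058k usec completion-latency maximum (and the 0.01% of I/Os in the >=2000 msec bucket) corresponds to the roughly 30-second window between the kill -9 at 12:37:50 and the UBLK_CMD_START_USER_RECOVERY completion at 12:38:20, during which in-flight I/O was parked until recovery rather than failed (fio reports err= 0).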
00:17:36.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.655 00:17:36.655 Run status group 0 (all jobs): 00:17:36.655 READ: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=2291MiB (2402MB), run=60003-60003msec 00:17:36.655 WRITE: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=2289MiB (2400MB), run=60003-60003msec 00:17:36.655 00:17:36.655 Disk stats (read/write): 00:17:36.655 ublkb1: ios=584005/583495, merge=0/0, ticks=3864951/3617929, in_queue=7482880, util=99.94% 00:17:36.655 12:38:45 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:36.655 12:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.655 12:38:45 -- common/autotest_common.sh@10 -- # set +x 00:17:36.655 [2024-05-15 12:38:45.268167] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:36.655 [2024-05-15 12:38:45.313705] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:36.655 [2024-05-15 12:38:45.313937] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:36.656 [2024-05-15 12:38:45.319534] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:36.656 [2024-05-15 12:38:45.319721] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:36.656 [2024-05-15 12:38:45.319756] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:36.656 12:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.656 12:38:45 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:36.656 12:38:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.656 12:38:45 -- common/autotest_common.sh@10 -- # set +x 00:17:36.656 [2024-05-15 12:38:45.329697] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:36.656 [2024-05-15 12:38:45.337572] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:36.656 [2024-05-15 12:38:45.337618] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:36.656 12:38:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.656 12:38:45 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:36.656 12:38:45 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:36.656 12:38:45 -- ublk/ublk_recovery.sh@14 -- # killprocess 71610 00:17:36.656 12:38:45 -- common/autotest_common.sh@926 -- # '[' -z 71610 ']' 00:17:36.656 12:38:45 -- common/autotest_common.sh@930 -- # kill -0 71610 00:17:36.656 12:38:45 -- common/autotest_common.sh@931 -- # uname 00:17:36.656 12:38:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.656 12:38:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71610 00:17:36.656 killing process with pid 71610 00:17:36.656 12:38:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:36.656 12:38:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:36.656 12:38:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71610' 00:17:36.656 12:38:45 -- common/autotest_common.sh@945 -- # kill 71610 00:17:36.656 12:38:45 -- common/autotest_common.sh@950 -- # wait 71610 00:17:37.591 [2024-05-15 12:38:46.398714] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:37.591 [2024-05-15 12:38:46.398775] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:39.043 00:17:39.043 real 1m5.799s 00:17:39.043 user 1m53.115s 00:17:39.043 sys 0m18.448s 00:17:39.043 ************************************ 00:17:39.043 END TEST ublk_recovery 
00:17:39.043 ************************************ 00:17:39.043 12:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.043 12:38:47 -- common/autotest_common.sh@10 -- # set +x 00:17:39.043 12:38:47 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@268 -- # timing_exit lib 00:17:39.043 12:38:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:39.043 12:38:47 -- common/autotest_common.sh@10 -- # set +x 00:17:39.043 12:38:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:39.043 12:38:47 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:39.043 12:38:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:39.043 12:38:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:39.043 12:38:47 -- common/autotest_common.sh@10 -- # set +x 00:17:39.043 ************************************ 00:17:39.043 START TEST ftl 00:17:39.043 ************************************ 00:17:39.043 12:38:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:39.043 * Looking for test storage... 00:17:39.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:39.043 12:38:47 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:39.043 12:38:47 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:39.043 12:38:47 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:39.043 12:38:47 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:39.043 12:38:47 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:39.043 12:38:47 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:39.043 12:38:47 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.043 12:38:47 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:39.043 12:38:47 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:39.043 12:38:47 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.043 12:38:47 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.043 12:38:47 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:39.043 12:38:47 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:39.043 12:38:47 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:39.043 12:38:47 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:39.043 12:38:47 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:39.043 12:38:47 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:39.043 12:38:47 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.043 12:38:47 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.043 12:38:47 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:39.043 12:38:47 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:39.043 12:38:47 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:39.043 12:38:47 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:39.043 12:38:47 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:39.043 12:38:47 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:39.043 12:38:47 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:39.043 12:38:47 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:39.043 12:38:47 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.043 12:38:47 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.043 12:38:47 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.043 12:38:47 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:39.043 12:38:47 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:39.043 12:38:47 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:39.043 12:38:47 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:39.043 12:38:47 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:39.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.611 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.611 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.611 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.611 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.611 12:38:48 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72411 00:17:39.611 12:38:48 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:39.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
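spdk_tgt is launched with --wait-for-rpc here, which parks the target before subsystem initialization so that bdev options can be applied over RPC first. The bring-up order, reconstructed in outline from the xtrace that follows (paths shortened; the process substitution feeding gen_nvme.sh output shows up as /dev/fd/62 in the trace):

    # ftl.sh bring-up in outline (sketch, not a verbatim excerpt):
    "$SPDK_BIN_DIR/spdk_tgt" --wait-for-rpc & spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    rpc.py bdev_set_options -d         # applied before init (ftl.sh@40)
    rpc.py framework_start_init        # run the deferred subsystem init
    rpc.py load_subsystem_config -j <(gen_nvme.sh)  # attach NVMe controllers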
00:17:39.611 12:38:48 -- ftl/ftl.sh@38 -- # waitforlisten 72411 00:17:39.611 12:38:48 -- common/autotest_common.sh@819 -- # '[' -z 72411 ']' 00:17:39.611 12:38:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.611 12:38:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:39.611 12:38:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.611 12:38:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:39.611 12:38:48 -- common/autotest_common.sh@10 -- # set +x 00:17:39.611 [2024-05-15 12:38:48.535442] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:39.611 [2024-05-15 12:38:48.535637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72411 ] 00:17:39.868 [2024-05-15 12:38:48.708188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.127 [2024-05-15 12:38:48.942478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.127 [2024-05-15 12:38:48.942803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.694 12:38:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:40.694 12:38:49 -- common/autotest_common.sh@852 -- # return 0 00:17:40.694 12:38:49 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:40.694 12:38:49 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:42.069 12:38:50 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:42.069 12:38:50 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:42.328 12:38:51 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:42.328 12:38:51 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:42.328 12:38:51 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:42.586 12:38:51 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:17:42.586 12:38:51 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:42.586 12:38:51 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:17:42.586 12:38:51 -- ftl/ftl.sh@50 -- # break 00:17:42.586 12:38:51 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:17:42.586 12:38:51 -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:42.586 12:38:51 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:42.586 12:38:51 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:42.845 12:38:51 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:17:42.845 12:38:51 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:42.845 12:38:51 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:17:42.845 12:38:51 -- ftl/ftl.sh@63 -- # break 00:17:42.845 12:38:51 -- ftl/ftl.sh@66 -- # killprocess 72411 00:17:42.845 12:38:51 -- common/autotest_common.sh@926 -- # '[' -z 72411 ']' 00:17:42.845 12:38:51 -- common/autotest_common.sh@930 -- # kill -0 72411 00:17:42.845 12:38:51 -- common/autotest_common.sh@931 -- # uname 00:17:42.845 12:38:51 -- common/autotest_common.sh@931 -- # 
'[' Linux = Linux ']' 00:17:42.845 12:38:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72411 00:17:42.845 killing process with pid 72411 00:17:42.845 12:38:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:42.845 12:38:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:42.845 12:38:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72411' 00:17:42.845 12:38:51 -- common/autotest_common.sh@945 -- # kill 72411 00:17:42.845 12:38:51 -- common/autotest_common.sh@950 -- # wait 72411 00:17:45.448 12:38:53 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:17:45.448 12:38:53 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:17:45.448 12:38:53 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:45.448 12:38:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:45.448 12:38:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:45.448 12:38:53 -- common/autotest_common.sh@10 -- # set +x 00:17:45.448 ************************************ 00:17:45.448 START TEST ftl_fio_basic 00:17:45.448 ************************************ 00:17:45.448 12:38:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:45.448 * Looking for test storage... 00:17:45.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.448 12:38:53 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:45.448 12:38:54 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:45.448 12:38:54 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.448 12:38:54 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.448 12:38:54 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
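The cache/base device split above is driven entirely by bdev metadata: an NV-cache candidate must expose 64-byte metadata and at least 1310720 blocks (1310720 * 4 KiB = 5 GiB), and any other non-zoned, large-enough controller becomes the base device. The two jq filters, as they appear in the trace:

    # NV-cache candidates: 64B metadata, non-zoned, >= 5 GiB
    rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'
    # Base candidates: any other non-zoned controller >= 5 GiB
    rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0"
          and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'

Here that yields 0000:00:06.0 as the write-buffer cache device and 0000:00:07.0 as the base device.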
00:17:45.448 12:38:54 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.448 12:38:54 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.448 12:38:54 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:45.448 12:38:54 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:45.448 12:38:54 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.448 12:38:54 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.448 12:38:54 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:45.448 12:38:54 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:45.448 12:38:54 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.448 12:38:54 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.448 12:38:54 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:45.448 12:38:54 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:45.448 12:38:54 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.448 12:38:54 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.448 12:38:54 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:45.448 12:38:54 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:45.448 12:38:54 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.448 12:38:54 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.448 12:38:54 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.448 12:38:54 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.448 12:38:54 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:45.448 12:38:54 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:45.448 12:38:54 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.448 12:38:54 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.448 12:38:54 -- ftl/fio.sh@11 -- # declare -A suite 00:17:45.448 12:38:54 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:45.448 12:38:54 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:45.448 12:38:54 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:45.448 12:38:54 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.448 12:38:54 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:17:45.448 12:38:54 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:17:45.448 12:38:54 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:45.448 12:38:54 -- ftl/fio.sh@26 -- # uuid= 00:17:45.448 12:38:54 -- ftl/fio.sh@27 -- # timeout=240 00:17:45.448 12:38:54 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:45.448 12:38:54 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:45.448 12:38:54 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:45.448 12:38:54 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:45.448 12:38:54 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.449 12:38:54 -- ftl/fio.sh@40 
-- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.449 12:38:54 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:45.449 12:38:54 -- ftl/fio.sh@45 -- # svcpid=72545 00:17:45.449 12:38:54 -- ftl/fio.sh@46 -- # waitforlisten 72545 00:17:45.449 12:38:54 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:45.449 12:38:54 -- common/autotest_common.sh@819 -- # '[' -z 72545 ']' 00:17:45.449 12:38:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.449 12:38:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:45.449 12:38:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.449 12:38:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:45.449 12:38:54 -- common/autotest_common.sh@10 -- # set +x 00:17:45.449 [2024-05-15 12:38:54.125128] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:45.449 [2024-05-15 12:38:54.125583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72545 ] 00:17:45.449 [2024-05-15 12:38:54.291247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:45.707 [2024-05-15 12:38:54.534051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.707 [2024-05-15 12:38:54.534451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.707 [2024-05-15 12:38:54.534651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.707 [2024-05-15 12:38:54.534668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.081 12:38:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:47.081 12:38:55 -- common/autotest_common.sh@852 -- # return 0 00:17:47.081 12:38:55 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:47.081 12:38:55 -- ftl/common.sh@54 -- # local name=nvme0 00:17:47.081 12:38:55 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:47.081 12:38:55 -- ftl/common.sh@56 -- # local size=103424 00:17:47.081 12:38:55 -- ftl/common.sh@59 -- # local base_bdev 00:17:47.081 12:38:55 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:47.081 12:38:56 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:47.081 12:38:56 -- ftl/common.sh@62 -- # local base_size 00:17:47.081 12:38:56 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:47.081 12:38:56 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:47.081 12:38:56 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:47.081 12:38:56 -- common/autotest_common.sh@1359 -- # local bs 00:17:47.081 12:38:56 -- common/autotest_common.sh@1360 -- # local nb 00:17:47.081 12:38:56 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:47.341 12:38:56 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:47.341 { 00:17:47.341 "name": "nvme0n1", 00:17:47.341 "aliases": [ 00:17:47.341 "0c5e8468-55c6-4b5e-b256-59a01815b2cc" 00:17:47.341 ], 00:17:47.341 "product_name": "NVMe disk", 00:17:47.341 
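With the fio target up on cores 0-2, fio.sh builds its bdev stack: attach the base NVMe controller as nvme0, carve a thin-provisioned lvol out of it, split an NV-cache partition off the cache controller, and bind the two together with bdev_ftl_create (seen further below as: bdev_ftl_create -b ftl0 -d <lvol uuid> -c nvc0n1p0 --l2p_dram_limit 60). The get_bdev_size helper used at each sizing step is a small jq computation over bdev_get_bdevs output; a sketch of the equivalent logic (the real helper lives in autotest_common.sh):

    # Equivalent of get_bdev_size: block size times block count, in MiB.
    get_bdev_size() {
        local info bs nb
        info=$(rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<<"$info")   # 4096 for nvme0n1
        nb=$(jq '.[] .num_blocks' <<<"$info")   # 1310720 for nvme0n1
        echo $(( bs * nb / 1024 / 1024 ))       # -> 5120 (MiB)
    }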
"block_size": 4096, 00:17:47.341 "num_blocks": 1310720, 00:17:47.341 "uuid": "0c5e8468-55c6-4b5e-b256-59a01815b2cc", 00:17:47.341 "assigned_rate_limits": { 00:17:47.341 "rw_ios_per_sec": 0, 00:17:47.341 "rw_mbytes_per_sec": 0, 00:17:47.341 "r_mbytes_per_sec": 0, 00:17:47.341 "w_mbytes_per_sec": 0 00:17:47.341 }, 00:17:47.341 "claimed": false, 00:17:47.341 "zoned": false, 00:17:47.341 "supported_io_types": { 00:17:47.341 "read": true, 00:17:47.341 "write": true, 00:17:47.341 "unmap": true, 00:17:47.341 "write_zeroes": true, 00:17:47.341 "flush": true, 00:17:47.341 "reset": true, 00:17:47.341 "compare": true, 00:17:47.341 "compare_and_write": false, 00:17:47.341 "abort": true, 00:17:47.341 "nvme_admin": true, 00:17:47.341 "nvme_io": true 00:17:47.341 }, 00:17:47.341 "driver_specific": { 00:17:47.341 "nvme": [ 00:17:47.341 { 00:17:47.341 "pci_address": "0000:00:07.0", 00:17:47.341 "trid": { 00:17:47.341 "trtype": "PCIe", 00:17:47.341 "traddr": "0000:00:07.0" 00:17:47.341 }, 00:17:47.341 "ctrlr_data": { 00:17:47.341 "cntlid": 0, 00:17:47.341 "vendor_id": "0x1b36", 00:17:47.341 "model_number": "QEMU NVMe Ctrl", 00:17:47.341 "serial_number": "12341", 00:17:47.341 "firmware_revision": "8.0.0", 00:17:47.341 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:47.341 "oacs": { 00:17:47.341 "security": 0, 00:17:47.341 "format": 1, 00:17:47.341 "firmware": 0, 00:17:47.341 "ns_manage": 1 00:17:47.341 }, 00:17:47.341 "multi_ctrlr": false, 00:17:47.341 "ana_reporting": false 00:17:47.341 }, 00:17:47.341 "vs": { 00:17:47.341 "nvme_version": "1.4" 00:17:47.341 }, 00:17:47.341 "ns_data": { 00:17:47.341 "id": 1, 00:17:47.341 "can_share": false 00:17:47.341 } 00:17:47.341 } 00:17:47.341 ], 00:17:47.341 "mp_policy": "active_passive" 00:17:47.341 } 00:17:47.341 } 00:17:47.341 ]' 00:17:47.341 12:38:56 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:47.600 12:38:56 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:47.600 12:38:56 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:47.600 12:38:56 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:47.600 12:38:56 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:47.600 12:38:56 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:47.600 12:38:56 -- ftl/common.sh@63 -- # base_size=5120 00:17:47.600 12:38:56 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:47.600 12:38:56 -- ftl/common.sh@67 -- # clear_lvols 00:17:47.600 12:38:56 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:47.600 12:38:56 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:47.956 12:38:56 -- ftl/common.sh@28 -- # stores= 00:17:47.956 12:38:56 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:47.956 12:38:56 -- ftl/common.sh@68 -- # lvs=92ea3959-2d3d-4773-8fe5-93f533590527 00:17:47.956 12:38:56 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 92ea3959-2d3d-4773-8fe5-93f533590527 00:17:48.215 12:38:57 -- ftl/fio.sh@48 -- # split_bdev=028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.215 12:38:57 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.215 12:38:57 -- ftl/common.sh@35 -- # local name=nvc0 00:17:48.215 12:38:57 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:48.215 12:38:57 -- ftl/common.sh@37 -- # local base_bdev=028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.215 12:38:57 -- ftl/common.sh@38 -- # local 
cache_size= 00:17:48.215 12:38:57 -- ftl/common.sh@41 -- # get_bdev_size 028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.216 12:38:57 -- common/autotest_common.sh@1357 -- # local bdev_name=028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.216 12:38:57 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:48.216 12:38:57 -- common/autotest_common.sh@1359 -- # local bs 00:17:48.216 12:38:57 -- common/autotest_common.sh@1360 -- # local nb 00:17:48.216 12:38:57 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.475 12:38:57 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:48.475 { 00:17:48.475 "name": "028a34f3-642e-4fe8-9b64-448bd96f6b01", 00:17:48.475 "aliases": [ 00:17:48.475 "lvs/nvme0n1p0" 00:17:48.475 ], 00:17:48.475 "product_name": "Logical Volume", 00:17:48.475 "block_size": 4096, 00:17:48.475 "num_blocks": 26476544, 00:17:48.475 "uuid": "028a34f3-642e-4fe8-9b64-448bd96f6b01", 00:17:48.475 "assigned_rate_limits": { 00:17:48.475 "rw_ios_per_sec": 0, 00:17:48.475 "rw_mbytes_per_sec": 0, 00:17:48.475 "r_mbytes_per_sec": 0, 00:17:48.475 "w_mbytes_per_sec": 0 00:17:48.475 }, 00:17:48.475 "claimed": false, 00:17:48.475 "zoned": false, 00:17:48.475 "supported_io_types": { 00:17:48.475 "read": true, 00:17:48.475 "write": true, 00:17:48.475 "unmap": true, 00:17:48.475 "write_zeroes": true, 00:17:48.475 "flush": false, 00:17:48.475 "reset": true, 00:17:48.475 "compare": false, 00:17:48.475 "compare_and_write": false, 00:17:48.475 "abort": false, 00:17:48.475 "nvme_admin": false, 00:17:48.475 "nvme_io": false 00:17:48.475 }, 00:17:48.475 "driver_specific": { 00:17:48.475 "lvol": { 00:17:48.475 "lvol_store_uuid": "92ea3959-2d3d-4773-8fe5-93f533590527", 00:17:48.475 "base_bdev": "nvme0n1", 00:17:48.475 "thin_provision": true, 00:17:48.475 "snapshot": false, 00:17:48.475 "clone": false, 00:17:48.475 "esnap_clone": false 00:17:48.475 } 00:17:48.475 } 00:17:48.475 } 00:17:48.475 ]' 00:17:48.475 12:38:57 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:48.475 12:38:57 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:48.475 12:38:57 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:48.734 12:38:57 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:48.734 12:38:57 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:48.734 12:38:57 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:48.734 12:38:57 -- ftl/common.sh@41 -- # local base_size=5171 00:17:48.734 12:38:57 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:48.734 12:38:57 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:48.993 12:38:57 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:48.993 12:38:57 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:48.993 12:38:57 -- ftl/common.sh@48 -- # get_bdev_size 028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.993 12:38:57 -- common/autotest_common.sh@1357 -- # local bdev_name=028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:48.993 12:38:57 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:48.993 12:38:57 -- common/autotest_common.sh@1359 -- # local bs 00:17:48.993 12:38:57 -- common/autotest_common.sh@1360 -- # local nb 00:17:48.993 12:38:57 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:49.252 12:38:58 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:49.252 { 
00:17:49.252 "name": "028a34f3-642e-4fe8-9b64-448bd96f6b01", 00:17:49.252 "aliases": [ 00:17:49.252 "lvs/nvme0n1p0" 00:17:49.252 ], 00:17:49.252 "product_name": "Logical Volume", 00:17:49.252 "block_size": 4096, 00:17:49.252 "num_blocks": 26476544, 00:17:49.252 "uuid": "028a34f3-642e-4fe8-9b64-448bd96f6b01", 00:17:49.252 "assigned_rate_limits": { 00:17:49.252 "rw_ios_per_sec": 0, 00:17:49.252 "rw_mbytes_per_sec": 0, 00:17:49.252 "r_mbytes_per_sec": 0, 00:17:49.252 "w_mbytes_per_sec": 0 00:17:49.252 }, 00:17:49.252 "claimed": false, 00:17:49.252 "zoned": false, 00:17:49.252 "supported_io_types": { 00:17:49.252 "read": true, 00:17:49.252 "write": true, 00:17:49.252 "unmap": true, 00:17:49.252 "write_zeroes": true, 00:17:49.252 "flush": false, 00:17:49.252 "reset": true, 00:17:49.252 "compare": false, 00:17:49.252 "compare_and_write": false, 00:17:49.252 "abort": false, 00:17:49.252 "nvme_admin": false, 00:17:49.252 "nvme_io": false 00:17:49.252 }, 00:17:49.252 "driver_specific": { 00:17:49.252 "lvol": { 00:17:49.252 "lvol_store_uuid": "92ea3959-2d3d-4773-8fe5-93f533590527", 00:17:49.252 "base_bdev": "nvme0n1", 00:17:49.252 "thin_provision": true, 00:17:49.252 "snapshot": false, 00:17:49.252 "clone": false, 00:17:49.252 "esnap_clone": false 00:17:49.252 } 00:17:49.252 } 00:17:49.252 } 00:17:49.252 ]' 00:17:49.252 12:38:58 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:49.252 12:38:58 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:49.252 12:38:58 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:49.252 12:38:58 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:49.252 12:38:58 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:49.252 12:38:58 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:49.252 12:38:58 -- ftl/common.sh@48 -- # cache_size=5171 00:17:49.252 12:38:58 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:49.511 12:38:58 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:49.511 12:38:58 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:49.511 12:38:58 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:49.511 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:49.511 12:38:58 -- ftl/fio.sh@56 -- # get_bdev_size 028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:49.511 12:38:58 -- common/autotest_common.sh@1357 -- # local bdev_name=028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:49.511 12:38:58 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:49.511 12:38:58 -- common/autotest_common.sh@1359 -- # local bs 00:17:49.511 12:38:58 -- common/autotest_common.sh@1360 -- # local nb 00:17:49.511 12:38:58 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 028a34f3-642e-4fe8-9b64-448bd96f6b01 00:17:49.769 12:38:58 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:49.769 { 00:17:49.769 "name": "028a34f3-642e-4fe8-9b64-448bd96f6b01", 00:17:49.769 "aliases": [ 00:17:49.769 "lvs/nvme0n1p0" 00:17:49.769 ], 00:17:49.769 "product_name": "Logical Volume", 00:17:49.769 "block_size": 4096, 00:17:49.769 "num_blocks": 26476544, 00:17:49.769 "uuid": "028a34f3-642e-4fe8-9b64-448bd96f6b01", 00:17:49.769 "assigned_rate_limits": { 00:17:49.769 "rw_ios_per_sec": 0, 00:17:49.769 "rw_mbytes_per_sec": 0, 00:17:49.769 "r_mbytes_per_sec": 0, 00:17:49.769 "w_mbytes_per_sec": 0 00:17:49.769 }, 00:17:49.769 "claimed": false, 00:17:49.769 "zoned": false, 00:17:49.769 "supported_io_types": { 00:17:49.769 "read": true, 
00:17:49.769 "write": true, 00:17:49.769 "unmap": true, 00:17:49.769 "write_zeroes": true, 00:17:49.769 "flush": false, 00:17:49.769 "reset": true, 00:17:49.769 "compare": false, 00:17:49.769 "compare_and_write": false, 00:17:49.769 "abort": false, 00:17:49.769 "nvme_admin": false, 00:17:49.769 "nvme_io": false 00:17:49.769 }, 00:17:49.769 "driver_specific": { 00:17:49.769 "lvol": { 00:17:49.769 "lvol_store_uuid": "92ea3959-2d3d-4773-8fe5-93f533590527", 00:17:49.769 "base_bdev": "nvme0n1", 00:17:49.769 "thin_provision": true, 00:17:49.769 "snapshot": false, 00:17:49.769 "clone": false, 00:17:49.769 "esnap_clone": false 00:17:49.769 } 00:17:49.769 } 00:17:49.769 } 00:17:49.769 ]' 00:17:49.769 12:38:58 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:49.769 12:38:58 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:49.769 12:38:58 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:50.028 12:38:58 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:50.028 12:38:58 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:50.028 12:38:58 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:50.028 12:38:58 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:50.028 12:38:58 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:50.028 12:38:58 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 028a34f3-642e-4fe8-9b64-448bd96f6b01 -c nvc0n1p0 --l2p_dram_limit 60 00:17:50.028 [2024-05-15 12:38:59.025599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-05-15 12:38:59.025672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.028 [2024-05-15 12:38:59.025698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:50.028 [2024-05-15 12:38:59.025711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.028 [2024-05-15 12:38:59.025803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-05-15 12:38:59.025824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.028 [2024-05-15 12:38:59.025840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:50.028 [2024-05-15 12:38:59.025860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.028 [2024-05-15 12:38:59.025910] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.028 [2024-05-15 12:38:59.026933] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.028 [2024-05-15 12:38:59.026974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-05-15 12:38:59.026989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.028 [2024-05-15 12:38:59.027005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:17:50.028 [2024-05-15 12:38:59.027017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.028 [2024-05-15 12:38:59.027127] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cb80d1cf-58b7-4d38-9d6f-e99c8f811f2b 00:17:50.028 [2024-05-15 12:38:59.028921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-05-15 12:38:59.028966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:50.028 [2024-05-15 12:38:59.028983] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:50.028 [2024-05-15 12:38:59.028998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.028 [2024-05-15 12:38:59.038696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-05-15 12:38:59.038753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.028 [2024-05-15 12:38:59.038770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.602 ms 00:17:50.028 [2024-05-15 12:38:59.038785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.287 [2024-05-15 12:38:59.038917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.287 [2024-05-15 12:38:59.038948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.287 [2024-05-15 12:38:59.038962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:50.287 [2024-05-15 12:38:59.038979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.287 [2024-05-15 12:38:59.039069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.287 [2024-05-15 12:38:59.039090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.287 [2024-05-15 12:38:59.039103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:50.287 [2024-05-15 12:38:59.039117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.287 [2024-05-15 12:38:59.039181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.287 [2024-05-15 12:38:59.044407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.287 [2024-05-15 12:38:59.044447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.287 [2024-05-15 12:38:59.044467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.234 ms 00:17:50.287 [2024-05-15 12:38:59.044483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.287 [2024-05-15 12:38:59.044555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.287 [2024-05-15 12:38:59.044572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.287 [2024-05-15 12:38:59.044588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:50.287 [2024-05-15 12:38:59.044600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.287 [2024-05-15 12:38:59.044655] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:50.287 [2024-05-15 12:38:59.044816] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:50.287 [2024-05-15 12:38:59.044849] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.287 [2024-05-15 12:38:59.044868] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:50.287 [2024-05-15 12:38:59.044889] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.287 [2024-05-15 12:38:59.044904] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.287 [2024-05-15 12:38:59.044919] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
entries: 20971520 00:17:50.287 [2024-05-15 12:38:59.044931] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.287 [2024-05-15 12:38:59.044946] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:50.287 [2024-05-15 12:38:59.044958] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:50.287 [2024-05-15 12:38:59.044974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.287 [2024-05-15 12:38:59.044988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.287 [2024-05-15 12:38:59.045003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:17:50.287 [2024-05-15 12:38:59.045014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.287 [2024-05-15 12:38:59.045110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.287 [2024-05-15 12:38:59.045125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.287 [2024-05-15 12:38:59.045140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:50.287 [2024-05-15 12:38:59.045152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.287 [2024-05-15 12:38:59.045258] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.287 [2024-05-15 12:38:59.045283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.287 [2024-05-15 12:38:59.045303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.287 [2024-05-15 12:38:59.045316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.287 [2024-05-15 12:38:59.045341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:50.287 [2024-05-15 12:38:59.045373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.287 [2024-05-15 12:38:59.045387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.287 [2024-05-15 12:38:59.045410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.287 [2024-05-15 12:38:59.045422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:50.287 [2024-05-15 12:38:59.045436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.287 [2024-05-15 12:38:59.045448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.287 [2024-05-15 12:38:59.045461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:50.287 [2024-05-15 12:38:59.045472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.287 [2024-05-15 12:38:59.045523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:50.287 [2024-05-15 12:38:59.045539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:50.287 [2024-05-15 
12:38:59.045564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:50.287 [2024-05-15 12:38:59.045575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:50.287 [2024-05-15 12:38:59.045589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.287 [2024-05-15 12:38:59.045600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.287 [2024-05-15 12:38:59.045624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.287 [2024-05-15 12:38:59.045637] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.287 [2024-05-15 12:38:59.045661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.287 [2024-05-15 12:38:59.045672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.287 [2024-05-15 12:38:59.045695] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.287 [2024-05-15 12:38:59.045711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.287 [2024-05-15 12:38:59.045735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.287 [2024-05-15 12:38:59.045745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:50.287 [2024-05-15 12:38:59.045758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.287 [2024-05-15 12:38:59.045769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.288 [2024-05-15 12:38:59.045805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:50.288 [2024-05-15 12:38:59.045822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.288 [2024-05-15 12:38:59.045835] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.288 [2024-05-15 12:38:59.045853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.288 [2024-05-15 12:38:59.045874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.288 [2024-05-15 12:38:59.045886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.288 [2024-05-15 12:38:59.045902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.288 [2024-05-15 12:38:59.045914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.288 [2024-05-15 12:38:59.045927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.288 [2024-05-15 12:38:59.045938] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.288 [2024-05-15 12:38:59.045954] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.288 [2024-05-15 12:38:59.045967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.288 [2024-05-15 12:38:59.045988] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.288 [2024-05-15 12:38:59.046004] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.288 [2024-05-15 12:38:59.046020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:50.288 [2024-05-15 12:38:59.046032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:50.288 [2024-05-15 12:38:59.046046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:50.288 [2024-05-15 12:38:59.046058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:50.288 [2024-05-15 12:38:59.046072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:50.288 [2024-05-15 12:38:59.046084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:50.288 [2024-05-15 12:38:59.046098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:50.288 [2024-05-15 12:38:59.046110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:50.288 [2024-05-15 12:38:59.046125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:50.288 [2024-05-15 12:38:59.046138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:50.288 [2024-05-15 12:38:59.046154] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:50.288 [2024-05-15 12:38:59.046166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:50.288 [2024-05-15 12:38:59.046185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:50.288 [2024-05-15 12:38:59.046197] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.288 [2024-05-15 12:38:59.046212] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.288 [2024-05-15 12:38:59.046225] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.288 [2024-05-15 12:38:59.046239] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.288 [2024-05-15 12:38:59.046251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.288 [2024-05-15 12:38:59.046266] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.288 [2024-05-15 12:38:59.046284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 
12:38:59.046302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.288 [2024-05-15 12:38:59.046318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:17:50.288 [2024-05-15 12:38:59.046333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.068468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.068557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.288 [2024-05-15 12:38:59.068578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.047 ms 00:17:50.288 [2024-05-15 12:38:59.068593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.068719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.068749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.288 [2024-05-15 12:38:59.068763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:50.288 [2024-05-15 12:38:59.068776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.114616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.114689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.288 [2024-05-15 12:38:59.114710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.756 ms 00:17:50.288 [2024-05-15 12:38:59.114729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.114792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.114811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.288 [2024-05-15 12:38:59.114825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:50.288 [2024-05-15 12:38:59.114839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.115477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.115530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.288 [2024-05-15 12:38:59.115547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:17:50.288 [2024-05-15 12:38:59.115563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.115734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.115759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.288 [2024-05-15 12:38:59.115772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:50.288 [2024-05-15 12:38:59.115786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.147755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.147834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.288 [2024-05-15 12:38:59.147858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.931 ms 00:17:50.288 [2024-05-15 12:38:59.147874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.162439] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:50.288 [2024-05-15 12:38:59.184316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.184392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:50.288 [2024-05-15 12:38:59.184417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.245 ms 00:17:50.288 [2024-05-15 12:38:59.184430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.253801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.288 [2024-05-15 12:38:59.253880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:50.288 [2024-05-15 12:38:59.253906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.278 ms 00:17:50.288 [2024-05-15 12:38:59.253924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.288 [2024-05-15 12:38:59.254001] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:50.288 [2024-05-15 12:38:59.254022] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:53.570 [2024-05-15 12:39:02.347673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.570 [2024-05-15 12:39:02.347758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:53.570 [2024-05-15 12:39:02.347785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3093.669 ms 00:17:53.570 [2024-05-15 12:39:02.347807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.570 [2024-05-15 12:39:02.348098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.570 [2024-05-15 12:39:02.348126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:53.570 [2024-05-15 12:39:02.348145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:53.570 [2024-05-15 12:39:02.348157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.570 [2024-05-15 12:39:02.378757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.570 [2024-05-15 12:39:02.378837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:53.570 [2024-05-15 12:39:02.378862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.512 ms 00:17:53.570 [2024-05-15 12:39:02.378875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.570 [2024-05-15 12:39:02.409091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.570 [2024-05-15 12:39:02.409187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:53.570 [2024-05-15 12:39:02.409219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.146 ms 00:17:53.570 [2024-05-15 12:39:02.409232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.570 [2024-05-15 12:39:02.409741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.570 [2024-05-15 12:39:02.409778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:53.570 [2024-05-15 12:39:02.409809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:17:53.570 [2024-05-15 12:39:02.409825] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:53.570 [2024-05-15 12:39:02.487861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.570 [2024-05-15 12:39:02.487928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:53.570 [2024-05-15 12:39:02.487980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.943 ms 00:17:53.570 [2024-05-15 12:39:02.487994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.571 [2024-05-15 12:39:02.520189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.571 [2024-05-15 12:39:02.520244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:53.571 [2024-05-15 12:39:02.520267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.130 ms 00:17:53.571 [2024-05-15 12:39:02.520280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.571 [2024-05-15 12:39:02.524338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.571 [2024-05-15 12:39:02.524379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:53.571 [2024-05-15 12:39:02.524401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:17:53.571 [2024-05-15 12:39:02.524414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.571 [2024-05-15 12:39:02.555128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.571 [2024-05-15 12:39:02.555186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:53.571 [2024-05-15 12:39:02.555224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.636 ms 00:17:53.571 [2024-05-15 12:39:02.555237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.571 [2024-05-15 12:39:02.555325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.571 [2024-05-15 12:39:02.555344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:53.571 [2024-05-15 12:39:02.555377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:53.571 [2024-05-15 12:39:02.555390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.571 [2024-05-15 12:39:02.555552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.571 [2024-05-15 12:39:02.555580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:53.571 [2024-05-15 12:39:02.555598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:53.571 [2024-05-15 12:39:02.555611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.571 [2024-05-15 12:39:02.556892] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3530.792 ms, result 0 00:17:53.571 { 00:17:53.571 "name": "ftl0", 00:17:53.571 "uuid": "cb80d1cf-58b7-4d38-9d6f-e99c8f811f2b" 00:17:53.571 } 00:17:53.829 12:39:02 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:53.829 12:39:02 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:17:53.829 12:39:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:53.829 12:39:02 -- common/autotest_common.sh@889 -- # local i 00:17:53.829 12:39:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:53.829 12:39:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:53.829 12:39:02 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:54.088 12:39:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:54.088 [ 00:17:54.088 { 00:17:54.088 "name": "ftl0", 00:17:54.088 "aliases": [ 00:17:54.088 "cb80d1cf-58b7-4d38-9d6f-e99c8f811f2b" 00:17:54.088 ], 00:17:54.088 "product_name": "FTL disk", 00:17:54.088 "block_size": 4096, 00:17:54.088 "num_blocks": 20971520, 00:17:54.088 "uuid": "cb80d1cf-58b7-4d38-9d6f-e99c8f811f2b", 00:17:54.088 "assigned_rate_limits": { 00:17:54.088 "rw_ios_per_sec": 0, 00:17:54.088 "rw_mbytes_per_sec": 0, 00:17:54.088 "r_mbytes_per_sec": 0, 00:17:54.088 "w_mbytes_per_sec": 0 00:17:54.088 }, 00:17:54.088 "claimed": false, 00:17:54.088 "zoned": false, 00:17:54.088 "supported_io_types": { 00:17:54.088 "read": true, 00:17:54.088 "write": true, 00:17:54.088 "unmap": true, 00:17:54.088 "write_zeroes": true, 00:17:54.088 "flush": true, 00:17:54.088 "reset": false, 00:17:54.088 "compare": false, 00:17:54.088 "compare_and_write": false, 00:17:54.088 "abort": false, 00:17:54.088 "nvme_admin": false, 00:17:54.088 "nvme_io": false 00:17:54.088 }, 00:17:54.088 "driver_specific": { 00:17:54.088 "ftl": { 00:17:54.088 "base_bdev": "028a34f3-642e-4fe8-9b64-448bd96f6b01", 00:17:54.088 "cache": "nvc0n1p0" 00:17:54.088 } 00:17:54.088 } 00:17:54.088 } 00:17:54.088 ] 00:17:54.088 12:39:03 -- common/autotest_common.sh@895 -- # return 0 00:17:54.088 12:39:03 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:54.088 12:39:03 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:54.394 12:39:03 -- ftl/fio.sh@70 -- # echo ']}' 00:17:54.394 12:39:03 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:54.667 [2024-05-15 12:39:03.509559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.509642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:54.667 [2024-05-15 12:39:03.509664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.667 [2024-05-15 12:39:03.509680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.509734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.667 [2024-05-15 12:39:03.513385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.513418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:54.667 [2024-05-15 12:39:03.513437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:17:54.667 [2024-05-15 12:39:03.513449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.513950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.513989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:54.667 [2024-05-15 12:39:03.514007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:17:54.667 [2024-05-15 12:39:03.514032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.517266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.517294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:54.667 
[2024-05-15 12:39:03.517327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:17:54.667 [2024-05-15 12:39:03.517339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.523853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.523891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:54.667 [2024-05-15 12:39:03.523913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.477 ms 00:17:54.667 [2024-05-15 12:39:03.523925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.555261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.555340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:54.667 [2024-05-15 12:39:03.555365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.221 ms 00:17:54.667 [2024-05-15 12:39:03.555377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.574109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.574164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:54.667 [2024-05-15 12:39:03.574186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.666 ms 00:17:54.667 [2024-05-15 12:39:03.574199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.574456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.574511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:54.667 [2024-05-15 12:39:03.574531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:17:54.667 [2024-05-15 12:39:03.574563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.605353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.605412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:54.667 [2024-05-15 12:39:03.605451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.737 ms 00:17:54.667 [2024-05-15 12:39:03.605464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.635969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.636030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:54.667 [2024-05-15 12:39:03.636051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.418 ms 00:17:54.667 [2024-05-15 12:39:03.636063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.667 [2024-05-15 12:39:03.665735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.667 [2024-05-15 12:39:03.665795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:54.667 [2024-05-15 12:39:03.665817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.610 ms 00:17:54.667 [2024-05-15 12:39:03.665830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.927 [2024-05-15 12:39:03.695558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.927 [2024-05-15 12:39:03.695608] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:54.927 [2024-05-15 12:39:03.695644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.586 ms 00:17:54.927 [2024-05-15 12:39:03.695657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.927 [2024-05-15 12:39:03.695719] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:54.927 [2024-05-15 12:39:03.695743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.695999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 
[2024-05-15 12:39:03.696070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:54.927 [2024-05-15 12:39:03.696348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:54.928 [2024-05-15 12:39:03.696436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.696999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:54.928 [2024-05-15 12:39:03.697257] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:54.928 [2024-05-15 12:39:03.697276] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cb80d1cf-58b7-4d38-9d6f-e99c8f811f2b 00:17:54.928 [2024-05-15 12:39:03.697289] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:54.928 [2024-05-15 12:39:03.697303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:54.928 [2024-05-15 12:39:03.697315] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:54.928 [2024-05-15 12:39:03.697329] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:54.928 [2024-05-15 12:39:03.697341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:54.928 [2024-05-15 12:39:03.697355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:54.928 [2024-05-15 12:39:03.697367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:54.928 [2024-05-15 12:39:03.697380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:54.928 [2024-05-15 12:39:03.697391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:54.928 [2024-05-15 12:39:03.697407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.928 [2024-05-15 12:39:03.697419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:54.928 [2024-05-15 12:39:03.697434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:17:54.928 [2024-05-15 12:39:03.697446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.714441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.928 [2024-05-15 12:39:03.714484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:54.928 [2024-05-15 12:39:03.714524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.894 ms 00:17:54.928 [2024-05-15 12:39:03.714538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.714808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.928 [2024-05-15 12:39:03.714831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:54.928 [2024-05-15 12:39:03.714849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:17:54.928 [2024-05-15 12:39:03.714864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.774914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.774986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.928 [2024-05-15 12:39:03.775009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.775023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:54.928 [2024-05-15 12:39:03.775125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.775160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.928 [2024-05-15 12:39:03.775177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.775193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.775342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.775371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.928 [2024-05-15 12:39:03.775388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.775400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.775439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.775459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.928 [2024-05-15 12:39:03.775474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.775486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.895288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.895361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.928 [2024-05-15 12:39:03.895400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.895414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.934916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.935004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.928 [2024-05-15 12:39:03.935034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.935052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.935170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.935196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.928 [2024-05-15 12:39:03.935211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.935223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.935320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.935336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.928 [2024-05-15 12:39:03.935351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.935363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.935531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.935558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.928 [2024-05-15 12:39:03.935580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 
12:39:03.935592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.935675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.935694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:54.928 [2024-05-15 12:39:03.935710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.935723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.935794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.935809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.928 [2024-05-15 12:39:03.935824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.935836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.935909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.928 [2024-05-15 12:39:03.935926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.928 [2024-05-15 12:39:03.935941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.928 [2024-05-15 12:39:03.935953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.928 [2024-05-15 12:39:03.936166] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 426.583 ms, result 0 00:17:55.187 true 00:17:55.187 12:39:03 -- ftl/fio.sh@75 -- # killprocess 72545 00:17:55.187 12:39:03 -- common/autotest_common.sh@926 -- # '[' -z 72545 ']' 00:17:55.187 12:39:03 -- common/autotest_common.sh@930 -- # kill -0 72545 00:17:55.187 12:39:03 -- common/autotest_common.sh@931 -- # uname 00:17:55.187 12:39:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:55.187 12:39:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72545 00:17:55.187 12:39:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:55.187 12:39:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:55.187 killing process with pid 72545 00:17:55.187 12:39:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72545' 00:17:55.187 12:39:03 -- common/autotest_common.sh@945 -- # kill 72545 00:17:55.187 12:39:03 -- common/autotest_common.sh@950 -- # wait 72545 00:18:00.448 12:39:08 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:00.448 12:39:08 -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:00.448 12:39:08 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:00.448 12:39:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:00.448 12:39:08 -- common/autotest_common.sh@10 -- # set +x 00:18:00.448 12:39:08 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:00.448 12:39:08 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:00.448 12:39:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:00.448 12:39:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:00.448 12:39:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:00.448 12:39:08 -- common/autotest_common.sh@1319 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.448 12:39:08 -- common/autotest_common.sh@1320 -- # shift 00:18:00.448 12:39:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:00.448 12:39:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:00.448 12:39:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:00.448 12:39:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.448 12:39:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:00.448 12:39:08 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:00.448 12:39:08 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:00.448 12:39:08 -- common/autotest_common.sh@1326 -- # break 00:18:00.448 12:39:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:00.448 12:39:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:00.448 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:00.448 fio-3.35 00:18:00.448 Starting 1 thread 00:18:05.716 00:18:05.716 test: (groupid=0, jobs=1): err= 0: pid=72767: Wed May 15 12:39:14 2024 00:18:05.716 read: IOPS=956, BW=63.5MiB/s (66.6MB/s)(255MiB/4006msec) 00:18:05.716 slat (nsec): min=5422, max=30320, avg=6815.61, stdev=2336.00 00:18:05.716 clat (usec): min=320, max=813, avg=460.56, stdev=46.22 00:18:05.716 lat (usec): min=326, max=818, avg=467.38, stdev=46.89 00:18:05.716 clat percentiles (usec): 00:18:05.716 | 1.00th=[ 363], 5.00th=[ 400], 10.00th=[ 420], 20.00th=[ 433], 00:18:05.716 | 30.00th=[ 437], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 457], 00:18:05.716 | 70.00th=[ 478], 80.00th=[ 502], 90.00th=[ 523], 95.00th=[ 545], 00:18:05.716 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 652], 99.95th=[ 742], 00:18:05.716 | 99.99th=[ 816] 00:18:05.716 write: IOPS=963, BW=64.0MiB/s (67.1MB/s)(256MiB/4002msec); 0 zone resets 00:18:05.716 slat (nsec): min=19220, max=88227, avg=23387.04, stdev=4870.33 00:18:05.716 clat (usec): min=373, max=3894, avg=537.05, stdev=80.33 00:18:05.716 lat (usec): min=395, max=3929, avg=560.44, stdev=80.90 00:18:05.716 clat percentiles (usec): 00:18:05.716 | 1.00th=[ 420], 5.00th=[ 457], 10.00th=[ 465], 20.00th=[ 490], 00:18:05.716 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 537], 00:18:05.716 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 627], 00:18:05.716 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 914], 99.95th=[ 955], 00:18:05.716 | 99.99th=[ 3884] 00:18:05.716 bw ( KiB/s): min=63240, max=67048, per=100.00%, avg=65535.00, stdev=1372.45, samples=8 00:18:05.716 iops : min= 930, max= 986, avg=963.75, stdev=20.18, samples=8 00:18:05.716 lat (usec) : 500=51.01%, 750=48.33%, 1000=0.65% 00:18:05.716 lat (msec) : 4=0.01% 00:18:05.716 cpu : usr=99.40%, sys=0.10%, ctx=8, majf=0, minf=1318 00:18:05.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.716 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.716 00:18:05.716 Run status group 0 (all jobs): 00:18:05.716 
READ: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=255MiB (267MB), run=4006-4006msec 00:18:05.716 WRITE: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=256MiB (269MB), run=4002-4002msec 00:18:07.091 ----------------------------------------------------- 00:18:07.092 Suppressions used: 00:18:07.092 count bytes template 00:18:07.092 1 5 /usr/src/fio/parse.c 00:18:07.092 1 8 libtcmalloc_minimal.so 00:18:07.092 1 904 libcrypto.so 00:18:07.092 ----------------------------------------------------- 00:18:07.092 00:18:07.092 12:39:15 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:07.092 12:39:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:07.092 12:39:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.092 12:39:15 -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:07.092 12:39:15 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:07.092 12:39:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:07.092 12:39:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.092 12:39:15 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:07.092 12:39:15 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:07.092 12:39:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:07.092 12:39:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:07.092 12:39:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:07.092 12:39:15 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.092 12:39:15 -- common/autotest_common.sh@1320 -- # shift 00:18:07.092 12:39:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:07.092 12:39:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:07.092 12:39:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.092 12:39:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:07.092 12:39:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:07.092 12:39:15 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:07.092 12:39:15 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:07.092 12:39:15 -- common/autotest_common.sh@1326 -- # break 00:18:07.092 12:39:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:07.092 12:39:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:07.350 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:07.350 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:07.350 fio-3.35 00:18:07.350 Starting 2 threads 00:18:39.478 00:18:39.478 first_half: (groupid=0, jobs=1): err= 0: pid=72871: Wed May 15 12:39:44 2024 00:18:39.478 read: IOPS=2421, BW=9687KiB/s (9919kB/s)(256MiB/27037msec) 00:18:39.478 slat (nsec): min=4365, max=74429, avg=8038.52, stdev=2783.76 00:18:39.478 clat (usec): min=833, max=293293, avg=44921.25, stdev=28360.16 00:18:39.478 lat (usec): min=838, max=293302, avg=44929.29, stdev=28360.46 00:18:39.478 clat percentiles 
(msec): 00:18:39.478 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:18:39.478 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 39], 00:18:39.478 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 50], 95.00th=[ 87], 00:18:39.478 | 99.00th=[ 194], 99.50th=[ 215], 99.90th=[ 257], 99.95th=[ 271], 00:18:39.478 | 99.99th=[ 284] 00:18:39.478 write: IOPS=2427, BW=9712KiB/s (9945kB/s)(256MiB/26992msec); 0 zone resets 00:18:39.478 slat (usec): min=5, max=245, avg= 8.76, stdev= 4.62 00:18:39.478 clat (usec): min=444, max=50559, avg=7890.92, stdev=7664.04 00:18:39.478 lat (usec): min=455, max=50566, avg=7899.68, stdev=7664.13 00:18:39.478 clat percentiles (usec): 00:18:39.478 | 1.00th=[ 1090], 5.00th=[ 1450], 10.00th=[ 1827], 20.00th=[ 3294], 00:18:39.478 | 30.00th=[ 4293], 40.00th=[ 5407], 50.00th=[ 6128], 60.00th=[ 6980], 00:18:39.478 | 70.00th=[ 7635], 80.00th=[ 9110], 90.00th=[14615], 95.00th=[21890], 00:18:39.478 | 99.00th=[41157], 99.50th=[43254], 99.90th=[46400], 99.95th=[47449], 00:18:39.478 | 99.99th=[49546] 00:18:39.478 bw ( KiB/s): min= 392, max=44880, per=100.00%, avg=23668.73, stdev=14701.05, samples=22 00:18:39.478 iops : min= 98, max=11220, avg=5917.18, stdev=3675.26, samples=22 00:18:39.478 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.26% 00:18:39.478 lat (msec) : 2=5.47%, 4=7.67%, 10=27.73%, 20=7.65%, 50=46.38% 00:18:39.478 lat (msec) : 100=2.57%, 250=2.17%, 500=0.06% 00:18:39.478 cpu : usr=99.19%, sys=0.21%, ctx=49, majf=0, minf=5560 00:18:39.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:39.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.478 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:39.478 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:39.478 second_half: (groupid=0, jobs=1): err= 0: pid=72872: Wed May 15 12:39:44 2024 00:18:39.478 read: IOPS=2446, BW=9787KiB/s (10.0MB/s)(256MiB/26766msec) 00:18:39.478 slat (nsec): min=4763, max=77850, avg=7587.39, stdev=2026.25 00:18:39.478 clat (msec): min=10, max=250, avg=45.12, stdev=25.01 00:18:39.478 lat (msec): min=10, max=250, avg=45.13, stdev=25.01 00:18:39.478 clat percentiles (msec): 00:18:39.478 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:18:39.478 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 40], 00:18:39.478 | 70.00th=[ 42], 80.00th=[ 44], 90.00th=[ 51], 95.00th=[ 81], 00:18:39.478 | 99.00th=[ 180], 99.50th=[ 194], 99.90th=[ 215], 99.95th=[ 222], 00:18:39.478 | 99.99th=[ 239] 00:18:39.478 write: IOPS=2464, BW=9856KiB/s (10.1MB/s)(256MiB/26597msec); 0 zone resets 00:18:39.478 slat (usec): min=5, max=456, avg= 8.44, stdev= 5.15 00:18:39.478 clat (usec): min=471, max=42920, avg=7163.84, stdev=4407.30 00:18:39.478 lat (usec): min=481, max=42927, avg=7172.28, stdev=4407.56 00:18:39.478 clat percentiles (usec): 00:18:39.478 | 1.00th=[ 1270], 5.00th=[ 2089], 10.00th=[ 3097], 20.00th=[ 3884], 00:18:39.478 | 30.00th=[ 4948], 40.00th=[ 5669], 50.00th=[ 6456], 60.00th=[ 7111], 00:18:39.478 | 70.00th=[ 7767], 80.00th=[ 9110], 90.00th=[13042], 95.00th=[15139], 00:18:39.478 | 99.00th=[21365], 99.50th=[32637], 99.90th=[40633], 99.95th=[41681], 00:18:39.478 | 99.99th=[42730] 00:18:39.478 bw ( KiB/s): min= 848, max=42640, per=100.00%, avg=21761.00, stdev=11604.34, samples=24 00:18:39.478 iops : min= 212, max=10660, avg=5440.25, stdev=2901.09, samples=24 00:18:39.478 lat (usec) : 500=0.01%, 750=0.06%, 
1000=0.16% 00:18:39.478 lat (msec) : 2=2.05%, 4=8.44%, 10=30.76%, 20=8.01%, 50=45.25% 00:18:39.478 lat (msec) : 100=3.19%, 250=2.08%, 500=0.01% 00:18:39.478 cpu : usr=99.31%, sys=0.11%, ctx=48, majf=0, minf=5559 00:18:39.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:39.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.478 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:39.478 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:39.478 00:18:39.478 Run status group 0 (all jobs): 00:18:39.478 READ: bw=18.9MiB/s (19.8MB/s), 9687KiB/s-9787KiB/s (9919kB/s-10.0MB/s), io=512MiB (536MB), run=26766-27037msec 00:18:39.478 WRITE: bw=19.0MiB/s (19.9MB/s), 9712KiB/s-9856KiB/s (9945kB/s-10.1MB/s), io=512MiB (537MB), run=26597-26992msec 00:18:39.478 ----------------------------------------------------- 00:18:39.478 Suppressions used: 00:18:39.478 count bytes template 00:18:39.478 2 10 /usr/src/fio/parse.c 00:18:39.478 3 288 /usr/src/fio/iolog.c 00:18:39.478 1 8 libtcmalloc_minimal.so 00:18:39.478 1 904 libcrypto.so 00:18:39.478 ----------------------------------------------------- 00:18:39.478 00:18:39.478 12:39:46 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:39.478 12:39:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:39.478 12:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:39.478 12:39:46 -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:39.478 12:39:46 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:39.478 12:39:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:39.478 12:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:39.478 12:39:46 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:39.478 12:39:46 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:39.478 12:39:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:39.478 12:39:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.478 12:39:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:39.478 12:39:46 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.478 12:39:46 -- common/autotest_common.sh@1320 -- # shift 00:18:39.478 12:39:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:39.478 12:39:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.478 12:39:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.478 12:39:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:39.478 12:39:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:39.478 12:39:46 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:39.478 12:39:46 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:39.478 12:39:46 -- common/autotest_common.sh@1326 -- # break 00:18:39.478 12:39:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:39.478 12:39:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:39.478 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:39.478 fio-3.35 00:18:39.478 Starting 1 thread 00:18:57.557 00:18:57.557 test: (groupid=0, jobs=1): err= 0: pid=73213: Wed May 15 12:40:03 2024 00:18:57.557 read: IOPS=6592, BW=25.8MiB/s (27.0MB/s)(255MiB/9891msec) 00:18:57.557 slat (nsec): min=4594, max=44325, avg=6679.81, stdev=1621.49 00:18:57.557 clat (usec): min=745, max=38117, avg=19407.03, stdev=1357.66 00:18:57.557 lat (usec): min=750, max=38122, avg=19413.71, stdev=1357.66 00:18:57.557 clat percentiles (usec): 00:18:57.557 | 1.00th=[17957], 5.00th=[18220], 10.00th=[18482], 20.00th=[18744], 00:18:57.557 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19268], 00:18:57.557 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20317], 95.00th=[21890], 00:18:57.557 | 99.00th=[24773], 99.50th=[26346], 99.90th=[30802], 99.95th=[33817], 00:18:57.557 | 99.99th=[37487] 00:18:57.557 write: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(256MiB/5323msec); 0 zone resets 00:18:57.557 slat (usec): min=5, max=720, avg= 9.00, stdev= 4.89 00:18:57.557 clat (usec): min=592, max=58798, avg=10340.62, stdev=13000.07 00:18:57.557 lat (usec): min=636, max=58806, avg=10349.63, stdev=13000.08 00:18:57.557 clat percentiles (usec): 00:18:57.557 | 1.00th=[ 922], 5.00th=[ 1106], 10.00th=[ 1221], 20.00th=[ 1401], 00:18:57.557 | 30.00th=[ 1614], 40.00th=[ 2114], 50.00th=[ 6915], 60.00th=[ 7767], 00:18:57.557 | 70.00th=[ 8979], 80.00th=[10683], 90.00th=[36963], 95.00th=[41157], 00:18:57.557 | 99.00th=[45876], 99.50th=[46924], 99.90th=[48497], 99.95th=[49546], 00:18:57.557 | 99.99th=[53740] 00:18:57.557 bw ( KiB/s): min=26592, max=67864, per=96.78%, avg=47662.55, stdev=11359.01, samples=11 00:18:57.557 iops : min= 6648, max=16966, avg=11915.64, stdev=2839.75, samples=11 00:18:57.557 lat (usec) : 750=0.04%, 1000=1.10% 00:18:57.557 lat (msec) : 2=18.42%, 4=1.45%, 10=17.49%, 20=46.09%, 50=15.38% 00:18:57.557 lat (msec) : 100=0.02% 00:18:57.557 cpu : usr=99.15%, sys=0.28%, ctx=28, majf=0, minf=5567 00:18:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.557 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.557 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.557 00:18:57.557 Run status group 0 (all jobs): 00:18:57.557 READ: bw=25.8MiB/s (27.0MB/s), 25.8MiB/s-25.8MiB/s (27.0MB/s-27.0MB/s), io=255MiB (267MB), run=9891-9891msec 00:18:57.557 WRITE: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=256MiB (268MB), run=5323-5323msec 00:18:57.557 ----------------------------------------------------- 00:18:57.557 Suppressions used: 00:18:57.557 count bytes template 00:18:57.557 1 5 /usr/src/fio/parse.c 00:18:57.557 2 192 /usr/src/fio/iolog.c 00:18:57.557 1 8 libtcmalloc_minimal.so 00:18:57.557 1 904 libcrypto.so 00:18:57.557 ----------------------------------------------------- 00:18:57.557 00:18:57.557 12:40:05 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:57.557 12:40:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:57.557 12:40:05 -- common/autotest_common.sh@10 -- # set +x 00:18:57.557 12:40:05 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
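The fio_plugin wrapper traced above exists because the system fio binary is not built with ASan, while the SPDK bdev ioengine it dlopen()s is: the ASan runtime must come first in the process's library list, so it is preloaded ahead of the plugin. A minimal sketch of the idiom, assuming the plugin path from this run and a hypothetical $fio_job variable (this is not the literal autotest_common.sh code):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev    # SPDK fio ioengine, ASan-instrumented
    for sanitizer in libasan libclang_rt.asan; do
        # Ask the dynamic linker which sanitizer runtime the plugin links against.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # Load order matters: sanitizer runtime first, then the plugin, then run fio.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_job"

In this run the loop resolved /usr/lib64/libasan.so.8, which is exactly what the LD_PRELOAD line in the trace shows.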
00:18:57.557 12:40:05 -- ftl/fio.sh@85 -- # remove_shm 00:18:57.557 Remove shared memory files 00:18:57.557 12:40:05 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:57.557 12:40:05 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:57.557 12:40:05 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:57.557 12:40:05 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57061 /dev/shm/spdk_tgt_trace.pid71456 00:18:57.557 12:40:05 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:57.557 12:40:05 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:57.557 ************************************ 00:18:57.557 END TEST ftl_fio_basic 00:18:57.557 ************************************ 00:18:57.557 00:18:57.557 real 1m11.449s 00:18:57.557 user 2m38.238s 00:18:57.557 sys 0m4.091s 00:18:57.557 12:40:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.557 12:40:05 -- common/autotest_common.sh@10 -- # set +x 00:18:57.557 12:40:05 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:57.557 12:40:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:57.557 12:40:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:57.557 12:40:05 -- common/autotest_common.sh@10 -- # set +x 00:18:57.557 ************************************ 00:18:57.557 START TEST ftl_bdevperf 00:18:57.557 ************************************ 00:18:57.557 12:40:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:57.557 * Looking for test storage... 00:18:57.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.557 12:40:05 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:57.557 12:40:05 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:57.557 12:40:05 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.557 12:40:05 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.557 12:40:05 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:57.557 12:40:05 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:57.557 12:40:05 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.557 12:40:05 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:57.557 12:40:05 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:57.557 12:40:05 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.557 12:40:05 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.557 12:40:05 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:57.557 12:40:05 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:57.557 12:40:05 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:57.557 12:40:05 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:57.557 12:40:05 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:57.557 12:40:05 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:57.557 12:40:05 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.557 12:40:05 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.557 12:40:05 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:57.557 12:40:05 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:57.557 12:40:05 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:57.557 12:40:05 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:57.557 12:40:05 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:57.557 12:40:05 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:57.557 12:40:05 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:57.557 12:40:05 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:57.557 12:40:05 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.558 12:40:05 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@13 -- # use_append= 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:57.558 12:40:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:57.558 12:40:05 -- common/autotest_common.sh@10 -- # set +x 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=73460 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:57.558 12:40:05 -- ftl/bdevperf.sh@22 -- # waitforlisten 73460 00:18:57.558 12:40:05 -- common/autotest_common.sh@819 -- # '[' -z 73460 ']' 00:18:57.558 12:40:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.558 12:40:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:57.558 12:40:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.558 12:40:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:57.558 12:40:05 -- common/autotest_common.sh@10 -- # set +x 00:18:57.558 [2024-05-15 12:40:05.625921] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:57.558 [2024-05-15 12:40:05.626276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73460 ] 00:18:57.558 [2024-05-15 12:40:05.797458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.558 [2024-05-15 12:40:06.036263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.558 12:40:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:57.558 12:40:06 -- common/autotest_common.sh@852 -- # return 0 00:18:57.558 12:40:06 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:57.558 12:40:06 -- ftl/common.sh@54 -- # local name=nvme0 00:18:57.558 12:40:06 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:57.558 12:40:06 -- ftl/common.sh@56 -- # local size=103424 00:18:57.558 12:40:06 -- ftl/common.sh@59 -- # local base_bdev 00:18:57.558 12:40:06 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:58.125 12:40:06 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:58.125 12:40:06 -- ftl/common.sh@62 -- # local base_size 00:18:58.125 12:40:06 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:58.125 12:40:06 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:58.125 12:40:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:58.125 12:40:06 -- common/autotest_common.sh@1359 -- # local bs 00:18:58.125 12:40:06 -- common/autotest_common.sh@1360 -- # local nb 00:18:58.125 12:40:06 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:58.125 12:40:07 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:58.125 { 00:18:58.125 "name": "nvme0n1", 00:18:58.125 "aliases": [ 00:18:58.125 "58cbf2da-50d4-4d0a-a190-198a55dbad45" 00:18:58.125 ], 00:18:58.125 "product_name": "NVMe disk", 00:18:58.125 "block_size": 4096, 00:18:58.125 "num_blocks": 1310720, 00:18:58.125 "uuid": "58cbf2da-50d4-4d0a-a190-198a55dbad45", 00:18:58.125 "assigned_rate_limits": { 00:18:58.125 "rw_ios_per_sec": 0, 00:18:58.125 "rw_mbytes_per_sec": 0, 00:18:58.125 "r_mbytes_per_sec": 0, 00:18:58.125 "w_mbytes_per_sec": 0 00:18:58.125 }, 00:18:58.125 "claimed": true, 00:18:58.125 "claim_type": "read_many_write_one", 00:18:58.125 "zoned": false, 00:18:58.125 "supported_io_types": { 00:18:58.125 "read": true, 00:18:58.125 "write": true, 00:18:58.125 "unmap": true, 00:18:58.125 "write_zeroes": true, 00:18:58.125 "flush": true, 00:18:58.125 "reset": true, 00:18:58.125 "compare": true, 00:18:58.125 "compare_and_write": false, 00:18:58.125 "abort": true, 00:18:58.125 "nvme_admin": true, 00:18:58.125 "nvme_io": true 00:18:58.125 }, 00:18:58.125 "driver_specific": { 00:18:58.125 "nvme": [ 00:18:58.125 { 00:18:58.125 "pci_address": "0000:00:07.0", 00:18:58.125 "trid": { 00:18:58.125 "trtype": "PCIe", 00:18:58.125 "traddr": "0000:00:07.0" 00:18:58.125 }, 00:18:58.125 "ctrlr_data": { 00:18:58.125 "cntlid": 0, 
00:18:58.125 "vendor_id": "0x1b36", 00:18:58.125 "model_number": "QEMU NVMe Ctrl", 00:18:58.125 "serial_number": "12341", 00:18:58.125 "firmware_revision": "8.0.0", 00:18:58.125 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:58.125 "oacs": { 00:18:58.125 "security": 0, 00:18:58.125 "format": 1, 00:18:58.125 "firmware": 0, 00:18:58.125 "ns_manage": 1 00:18:58.125 }, 00:18:58.125 "multi_ctrlr": false, 00:18:58.125 "ana_reporting": false 00:18:58.125 }, 00:18:58.125 "vs": { 00:18:58.125 "nvme_version": "1.4" 00:18:58.125 }, 00:18:58.125 "ns_data": { 00:18:58.125 "id": 1, 00:18:58.125 "can_share": false 00:18:58.125 } 00:18:58.125 } 00:18:58.125 ], 00:18:58.125 "mp_policy": "active_passive" 00:18:58.125 } 00:18:58.125 } 00:18:58.125 ]' 00:18:58.125 12:40:07 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:58.125 12:40:07 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:58.125 12:40:07 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:58.408 12:40:07 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:58.408 12:40:07 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:58.408 12:40:07 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:58.408 12:40:07 -- ftl/common.sh@63 -- # base_size=5120 00:18:58.408 12:40:07 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:58.408 12:40:07 -- ftl/common.sh@67 -- # clear_lvols 00:18:58.408 12:40:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:58.408 12:40:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:58.408 12:40:07 -- ftl/common.sh@28 -- # stores=92ea3959-2d3d-4773-8fe5-93f533590527 00:18:58.408 12:40:07 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:58.408 12:40:07 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 92ea3959-2d3d-4773-8fe5-93f533590527 00:18:58.667 12:40:07 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:58.926 12:40:07 -- ftl/common.sh@68 -- # lvs=bfe1ec43-d7ee-410a-8565-61354db8e1c0 00:18:58.926 12:40:07 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bfe1ec43-d7ee-410a-8565-61354db8e1c0 00:18:59.185 12:40:08 -- ftl/bdevperf.sh@23 -- # split_bdev=3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.185 12:40:08 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.185 12:40:08 -- ftl/common.sh@35 -- # local name=nvc0 00:18:59.185 12:40:08 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:59.185 12:40:08 -- ftl/common.sh@37 -- # local base_bdev=3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.185 12:40:08 -- ftl/common.sh@38 -- # local cache_size= 00:18:59.185 12:40:08 -- ftl/common.sh@41 -- # get_bdev_size 3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.185 12:40:08 -- common/autotest_common.sh@1357 -- # local bdev_name=3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.185 12:40:08 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:59.185 12:40:08 -- common/autotest_common.sh@1359 -- # local bs 00:18:59.185 12:40:08 -- common/autotest_common.sh@1360 -- # local nb 00:18:59.185 12:40:08 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.443 12:40:08 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:59.443 { 00:18:59.443 "name": "3747b56c-891b-4b4f-bf2e-d6ff60d63113", 00:18:59.443 "aliases": [ 
00:18:59.443 "lvs/nvme0n1p0" 00:18:59.443 ], 00:18:59.443 "product_name": "Logical Volume", 00:18:59.443 "block_size": 4096, 00:18:59.443 "num_blocks": 26476544, 00:18:59.443 "uuid": "3747b56c-891b-4b4f-bf2e-d6ff60d63113", 00:18:59.443 "assigned_rate_limits": { 00:18:59.443 "rw_ios_per_sec": 0, 00:18:59.443 "rw_mbytes_per_sec": 0, 00:18:59.443 "r_mbytes_per_sec": 0, 00:18:59.443 "w_mbytes_per_sec": 0 00:18:59.443 }, 00:18:59.443 "claimed": false, 00:18:59.443 "zoned": false, 00:18:59.443 "supported_io_types": { 00:18:59.443 "read": true, 00:18:59.443 "write": true, 00:18:59.443 "unmap": true, 00:18:59.443 "write_zeroes": true, 00:18:59.443 "flush": false, 00:18:59.443 "reset": true, 00:18:59.443 "compare": false, 00:18:59.443 "compare_and_write": false, 00:18:59.443 "abort": false, 00:18:59.443 "nvme_admin": false, 00:18:59.443 "nvme_io": false 00:18:59.443 }, 00:18:59.443 "driver_specific": { 00:18:59.443 "lvol": { 00:18:59.443 "lvol_store_uuid": "bfe1ec43-d7ee-410a-8565-61354db8e1c0", 00:18:59.443 "base_bdev": "nvme0n1", 00:18:59.443 "thin_provision": true, 00:18:59.443 "snapshot": false, 00:18:59.443 "clone": false, 00:18:59.443 "esnap_clone": false 00:18:59.443 } 00:18:59.443 } 00:18:59.443 } 00:18:59.443 ]' 00:18:59.443 12:40:08 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:59.700 12:40:08 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:59.700 12:40:08 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:59.700 12:40:08 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:59.700 12:40:08 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:59.700 12:40:08 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:59.700 12:40:08 -- ftl/common.sh@41 -- # local base_size=5171 00:18:59.700 12:40:08 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:59.700 12:40:08 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:59.957 12:40:08 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:59.957 12:40:08 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:59.957 12:40:08 -- ftl/common.sh@48 -- # get_bdev_size 3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.957 12:40:08 -- common/autotest_common.sh@1357 -- # local bdev_name=3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:18:59.957 12:40:08 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:59.957 12:40:08 -- common/autotest_common.sh@1359 -- # local bs 00:18:59.957 12:40:08 -- common/autotest_common.sh@1360 -- # local nb 00:18:59.957 12:40:08 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:19:00.214 12:40:09 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:00.214 { 00:19:00.214 "name": "3747b56c-891b-4b4f-bf2e-d6ff60d63113", 00:19:00.214 "aliases": [ 00:19:00.214 "lvs/nvme0n1p0" 00:19:00.214 ], 00:19:00.214 "product_name": "Logical Volume", 00:19:00.214 "block_size": 4096, 00:19:00.214 "num_blocks": 26476544, 00:19:00.214 "uuid": "3747b56c-891b-4b4f-bf2e-d6ff60d63113", 00:19:00.214 "assigned_rate_limits": { 00:19:00.214 "rw_ios_per_sec": 0, 00:19:00.214 "rw_mbytes_per_sec": 0, 00:19:00.214 "r_mbytes_per_sec": 0, 00:19:00.214 "w_mbytes_per_sec": 0 00:19:00.214 }, 00:19:00.214 "claimed": false, 00:19:00.214 "zoned": false, 00:19:00.214 "supported_io_types": { 00:19:00.214 "read": true, 00:19:00.214 "write": true, 00:19:00.214 "unmap": true, 00:19:00.214 "write_zeroes": true, 00:19:00.214 "flush": false, 00:19:00.214 "reset": true, 
00:19:00.214 "compare": false, 00:19:00.214 "compare_and_write": false, 00:19:00.214 "abort": false, 00:19:00.214 "nvme_admin": false, 00:19:00.214 "nvme_io": false 00:19:00.214 }, 00:19:00.214 "driver_specific": { 00:19:00.214 "lvol": { 00:19:00.214 "lvol_store_uuid": "bfe1ec43-d7ee-410a-8565-61354db8e1c0", 00:19:00.214 "base_bdev": "nvme0n1", 00:19:00.214 "thin_provision": true, 00:19:00.214 "snapshot": false, 00:19:00.214 "clone": false, 00:19:00.214 "esnap_clone": false 00:19:00.214 } 00:19:00.214 } 00:19:00.214 } 00:19:00.214 ]' 00:19:00.214 12:40:09 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:00.214 12:40:09 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:00.214 12:40:09 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:00.214 12:40:09 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:00.214 12:40:09 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:00.214 12:40:09 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:00.214 12:40:09 -- ftl/common.sh@48 -- # cache_size=5171 00:19:00.214 12:40:09 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:00.472 12:40:09 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:19:00.472 12:40:09 -- ftl/bdevperf.sh@26 -- # get_bdev_size 3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:19:00.472 12:40:09 -- common/autotest_common.sh@1357 -- # local bdev_name=3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:19:00.472 12:40:09 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:00.472 12:40:09 -- common/autotest_common.sh@1359 -- # local bs 00:19:00.472 12:40:09 -- common/autotest_common.sh@1360 -- # local nb 00:19:00.472 12:40:09 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3747b56c-891b-4b4f-bf2e-d6ff60d63113 00:19:00.730 12:40:09 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:00.730 { 00:19:00.730 "name": "3747b56c-891b-4b4f-bf2e-d6ff60d63113", 00:19:00.730 "aliases": [ 00:19:00.730 "lvs/nvme0n1p0" 00:19:00.730 ], 00:19:00.730 "product_name": "Logical Volume", 00:19:00.730 "block_size": 4096, 00:19:00.730 "num_blocks": 26476544, 00:19:00.730 "uuid": "3747b56c-891b-4b4f-bf2e-d6ff60d63113", 00:19:00.730 "assigned_rate_limits": { 00:19:00.730 "rw_ios_per_sec": 0, 00:19:00.730 "rw_mbytes_per_sec": 0, 00:19:00.730 "r_mbytes_per_sec": 0, 00:19:00.730 "w_mbytes_per_sec": 0 00:19:00.730 }, 00:19:00.730 "claimed": false, 00:19:00.730 "zoned": false, 00:19:00.730 "supported_io_types": { 00:19:00.730 "read": true, 00:19:00.730 "write": true, 00:19:00.730 "unmap": true, 00:19:00.730 "write_zeroes": true, 00:19:00.730 "flush": false, 00:19:00.730 "reset": true, 00:19:00.730 "compare": false, 00:19:00.730 "compare_and_write": false, 00:19:00.730 "abort": false, 00:19:00.730 "nvme_admin": false, 00:19:00.730 "nvme_io": false 00:19:00.730 }, 00:19:00.730 "driver_specific": { 00:19:00.730 "lvol": { 00:19:00.730 "lvol_store_uuid": "bfe1ec43-d7ee-410a-8565-61354db8e1c0", 00:19:00.730 "base_bdev": "nvme0n1", 00:19:00.730 "thin_provision": true, 00:19:00.730 "snapshot": false, 00:19:00.730 "clone": false, 00:19:00.730 "esnap_clone": false 00:19:00.730 } 00:19:00.730 } 00:19:00.730 } 00:19:00.730 ]' 00:19:00.730 12:40:09 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:00.988 12:40:09 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:00.988 12:40:09 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:00.988 12:40:09 -- common/autotest_common.sh@1363 -- # 
nb=26476544 00:19:00.988 12:40:09 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:00.988 12:40:09 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:00.988 12:40:09 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:19:00.988 12:40:09 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3747b56c-891b-4b4f-bf2e-d6ff60d63113 -c nvc0n1p0 --l2p_dram_limit 20 00:19:01.247 [2024-05-15 12:40:10.057147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.247 [2024-05-15 12:40:10.057223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:01.247 [2024-05-15 12:40:10.057250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:01.247 [2024-05-15 12:40:10.057264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.247 [2024-05-15 12:40:10.057345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.247 [2024-05-15 12:40:10.057365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.247 [2024-05-15 12:40:10.057381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:01.247 [2024-05-15 12:40:10.057394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.247 [2024-05-15 12:40:10.057424] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:01.247 [2024-05-15 12:40:10.058465] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:01.247 [2024-05-15 12:40:10.058523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.247 [2024-05-15 12:40:10.058539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.247 [2024-05-15 12:40:10.058554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:19:01.247 [2024-05-15 12:40:10.058566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.247 [2024-05-15 12:40:10.058721] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4f7c2f6b-a0f0-4f42-97ff-f0ad133c13b9 00:19:01.247 [2024-05-15 12:40:10.060620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.247 [2024-05-15 12:40:10.060683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:01.247 [2024-05-15 12:40:10.060701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:01.247 [2024-05-15 12:40:10.060716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.247 [2024-05-15 12:40:10.071159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.247 [2024-05-15 12:40:10.071226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.247 [2024-05-15 12:40:10.071259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.392 ms 00:19:01.247 [2024-05-15 12:40:10.071277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.247 [2024-05-15 12:40:10.071394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.247 [2024-05-15 12:40:10.071416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.247 [2024-05-15 12:40:10.071429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:01.247 [2024-05-15 12:40:10.071447] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.247 [2024-05-15 12:40:10.071566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.247 [2024-05-15 12:40:10.071591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:01.247 [2024-05-15 12:40:10.071604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:01.247 [2024-05-15 12:40:10.071617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.247 [2024-05-15 12:40:10.071672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:01.247 [2024-05-15 12:40:10.076947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.248 [2024-05-15 12:40:10.076988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.248 [2024-05-15 12:40:10.077014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.288 ms 00:19:01.248 [2024-05-15 12:40:10.077056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.248 [2024-05-15 12:40:10.077117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.248 [2024-05-15 12:40:10.077131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:01.248 [2024-05-15 12:40:10.077145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:01.248 [2024-05-15 12:40:10.077156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.248 [2024-05-15 12:40:10.077196] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:01.248 [2024-05-15 12:40:10.077331] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:01.248 [2024-05-15 12:40:10.077356] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:01.248 [2024-05-15 12:40:10.077371] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:01.248 [2024-05-15 12:40:10.077387] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:01.248 [2024-05-15 12:40:10.077400] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:01.248 [2024-05-15 12:40:10.077414] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:01.248 [2024-05-15 12:40:10.077425] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:01.248 [2024-05-15 12:40:10.077440] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:01.248 [2024-05-15 12:40:10.077451] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:01.248 [2024-05-15 12:40:10.077464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.248 [2024-05-15 12:40:10.077479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:01.248 [2024-05-15 12:40:10.077492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:19:01.248 [2024-05-15 12:40:10.077554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.248 [2024-05-15 12:40:10.077638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.248 [2024-05-15 12:40:10.077654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Verify layout 00:19:01.248 [2024-05-15 12:40:10.077668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:01.248 [2024-05-15 12:40:10.077680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.248 [2024-05-15 12:40:10.077775] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:01.248 [2024-05-15 12:40:10.077791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:01.248 [2024-05-15 12:40:10.077826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.248 [2024-05-15 12:40:10.077841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.248 [2024-05-15 12:40:10.077859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:01.248 [2024-05-15 12:40:10.077871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:01.248 [2024-05-15 12:40:10.077885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:01.248 [2024-05-15 12:40:10.077896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:01.248 [2024-05-15 12:40:10.077925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:01.248 [2024-05-15 12:40:10.077937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.248 [2024-05-15 12:40:10.077950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:01.248 [2024-05-15 12:40:10.077992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:01.248 [2024-05-15 12:40:10.078020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.248 [2024-05-15 12:40:10.078030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:01.248 [2024-05-15 12:40:10.078042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:01.248 [2024-05-15 12:40:10.078052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:01.248 [2024-05-15 12:40:10.078077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:01.248 [2024-05-15 12:40:10.078089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:01.248 [2024-05-15 12:40:10.078114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:01.248 [2024-05-15 12:40:10.078124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:01.248 [2024-05-15 12:40:10.078137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:01.248 [2024-05-15 12:40:10.078148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.248 [2024-05-15 12:40:10.078169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:01.248 [2024-05-15 12:40:10.078181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.248 [2024-05-15 12:40:10.078203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:01.248 [2024-05-15 12:40:10.078213] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.248 [2024-05-15 12:40:10.078234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:01.248 [2024-05-15 12:40:10.078249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.248 [2024-05-15 12:40:10.078272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:01.248 [2024-05-15 12:40:10.078283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.248 [2024-05-15 12:40:10.078304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:01.248 [2024-05-15 12:40:10.078317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:01.248 [2024-05-15 12:40:10.078326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.248 [2024-05-15 12:40:10.078338] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:01.248 [2024-05-15 12:40:10.078349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:01.248 [2024-05-15 12:40:10.078361] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.248 [2024-05-15 12:40:10.078372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.248 [2024-05-15 12:40:10.078385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:01.248 [2024-05-15 12:40:10.078395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:01.248 [2024-05-15 12:40:10.078408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:01.248 [2024-05-15 12:40:10.078419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:01.248 [2024-05-15 12:40:10.078433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:01.248 [2024-05-15 12:40:10.078443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:01.248 [2024-05-15 12:40:10.078457] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:01.248 [2024-05-15 12:40:10.078472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.248 [2024-05-15 12:40:10.078486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:01.248 [2024-05-15 12:40:10.078498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:01.248 [2024-05-15 12:40:10.078511] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:01.248 [2024-05-15 12:40:10.078522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:01.248 [2024-05-15 12:40:10.078906] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:01.248 [2024-05-15 12:40:10.078991] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:01.248 [2024-05-15 12:40:10.079164] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:01.248 [2024-05-15 12:40:10.079228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:01.248 [2024-05-15 12:40:10.079451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:01.248 [2024-05-15 12:40:10.079673] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:01.248 [2024-05-15 12:40:10.079742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:01.248 [2024-05-15 12:40:10.079909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:01.248 [2024-05-15 12:40:10.080004] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:01.248 [2024-05-15 12:40:10.080064] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:01.248 [2024-05-15 12:40:10.080197] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.248 [2024-05-15 12:40:10.080259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:01.248 [2024-05-15 12:40:10.080316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:01.248 [2024-05-15 12:40:10.080439] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:01.248 [2024-05-15 12:40:10.080524] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:01.248 [2024-05-15 12:40:10.080587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.248 [2024-05-15 12:40:10.080631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:01.248 [2024-05-15 12:40:10.080650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:19:01.248 [2024-05-15 12:40:10.080665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.248 [2024-05-15 12:40:10.102793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.248 [2024-05-15 12:40:10.102857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.248 [2024-05-15 12:40:10.102876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.067 ms 00:19:01.249 [2024-05-15 12:40:10.102890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.249 [2024-05-15 12:40:10.102985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.249 [2024-05-15 12:40:10.103003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:01.249 [2024-05-15 12:40:10.103016] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:01.249 [2024-05-15 12:40:10.103029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.249 [2024-05-15 12:40:10.168063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.249 [2024-05-15 12:40:10.168122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.249 [2024-05-15 12:40:10.168158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.971 ms 00:19:01.249 [2024-05-15 12:40:10.168173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.249 [2024-05-15 12:40:10.168236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.249 [2024-05-15 12:40:10.168256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.249 [2024-05-15 12:40:10.168269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:01.249 [2024-05-15 12:40:10.168283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.249 [2024-05-15 12:40:10.168952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.249 [2024-05-15 12:40:10.168983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.249 [2024-05-15 12:40:10.169004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:19:01.249 [2024-05-15 12:40:10.169019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.249 [2024-05-15 12:40:10.169168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.249 [2024-05-15 12:40:10.169199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.249 [2024-05-15 12:40:10.169213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:01.249 [2024-05-15 12:40:10.169226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.249 [2024-05-15 12:40:10.189714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.249 [2024-05-15 12:40:10.189778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.249 [2024-05-15 12:40:10.189798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.461 ms 00:19:01.249 [2024-05-15 12:40:10.189814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.249 [2024-05-15 12:40:10.204759] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:01.249 [2024-05-15 12:40:10.212677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.249 [2024-05-15 12:40:10.212716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:01.249 [2024-05-15 12:40:10.212738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.731 ms 00:19:01.249 [2024-05-15 12:40:10.212752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.507 [2024-05-15 12:40:10.285197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.507 [2024-05-15 12:40:10.285259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:01.507 [2024-05-15 12:40:10.285285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.383 ms 00:19:01.507 [2024-05-15 12:40:10.285303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.507 [2024-05-15 12:40:10.285366] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: 
*NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:01.507 [2024-05-15 12:40:10.285387] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:04.044 [2024-05-15 12:40:12.808477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:12.808561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:04.044 [2024-05-15 12:40:12.808588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2523.115 ms 00:19:04.044 [2024-05-15 12:40:12.808601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:12.808848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:12.808872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.044 [2024-05-15 12:40:12.808890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:19:04.044 [2024-05-15 12:40:12.808902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:12.839603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:12.839654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:04.044 [2024-05-15 12:40:12.839678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.631 ms 00:19:04.044 [2024-05-15 12:40:12.839691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:12.869541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:12.869587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:04.044 [2024-05-15 12:40:12.869613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.790 ms 00:19:04.044 [2024-05-15 12:40:12.869625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:12.870064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:12.870099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.044 [2024-05-15 12:40:12.870118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:19:04.044 [2024-05-15 12:40:12.870134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:12.947369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:12.947434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:04.044 [2024-05-15 12:40:12.947474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.165 ms 00:19:04.044 [2024-05-15 12:40:12.947489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:12.980401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:12.980454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:04.044 [2024-05-15 12:40:12.980493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.838 ms 00:19:04.044 [2024-05-15 12:40:12.980529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:12.982773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 
12:40:12.982815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:04.044 [2024-05-15 12:40:12.982837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:19:04.044 [2024-05-15 12:40:12.982850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:13.013911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:13.013956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.044 [2024-05-15 12:40:13.013978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.993 ms 00:19:04.044 [2024-05-15 12:40:13.013991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:13.014048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:13.014073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.044 [2024-05-15 12:40:13.014090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:04.044 [2024-05-15 12:40:13.014102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:13.014228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.044 [2024-05-15 12:40:13.014247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.044 [2024-05-15 12:40:13.014263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:04.044 [2024-05-15 12:40:13.014276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.044 [2024-05-15 12:40:13.015559] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2957.850 ms, result 0 00:19:04.044 { 00:19:04.044 "name": "ftl0", 00:19:04.044 "uuid": "4f7c2f6b-a0f0-4f42-97ff-f0ad133c13b9" 00:19:04.044 } 00:19:04.044 12:40:13 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:04.044 12:40:13 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:19:04.044 12:40:13 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:19:04.301 12:40:13 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:04.558 [2024-05-15 12:40:13.423930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:04.558 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:04.558 Zero copy mechanism will not be used. 00:19:04.558 Running I/O for 4 seconds... 
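Stripped of the xtrace noise, everything from bdevperf startup to this first benchmark is a short RPC-driven sequence; a condensed sketch, with the lvstore/lvol UUIDs shortened to placeholders (all other values are the ones from this run):

    spdk=/home/vagrant/spdk_repo/spdk
    rpc=$spdk/scripts/rpc.py

    $spdk/build/examples/bdevperf -z -T ftl0 &                          # -z: start idle, wait for RPC-driven tests

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # 5 GiB QEMU NVMe, base device
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0    # second NVMe, write-buffer cache
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # 101 GiB thin-provisioned lvol
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV cache slice
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

    $spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632

FTL startup reported a total of ~2.96 s, most of it the first-boot scrub of the 4 GiB NV cache data region; the 69632-byte (68 KiB) I/O size of this first pass is what trips the 65536-byte zero-copy threshold noted above.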
00:19:08.739
00:19:08.739 Latency(us)
00:19:08.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.739 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:08.739 ftl0 : 4.00 1786.92 118.66 0.00 0.00 588.19 242.04 1832.03
00:19:08.739 ===================================================================================================================
00:19:08.739 Total : 1786.92 118.66 0.00 0.00 588.19 242.04 1832.03
00:19:08.739 0
00:19:08.739 [2024-05-15 12:40:17.434947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:08.739 12:40:17 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:19:08.739 [2024-05-15 12:40:17.566870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:08.739 Running I/O for 4 seconds...
00:19:12.930
00:19:12.930 Latency(us)
00:19:12.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.930 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:12.930 ftl0 : 4.02 7844.51 30.64 0.00 0.00 16277.65 301.61 38130.04
00:19:12.930 ===================================================================================================================
00:19:12.930 Total : 7844.51 30.64 0.00 0.00 16277.65 0.00 38130.04
00:19:12.930 [2024-05-15 12:40:21.594739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:12.930 0
00:19:12.930 12:40:21 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:19:12.930 [2024-05-15 12:40:21.710725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:12.930 Running I/O for 4 seconds...
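A quick cross-check on the two randwrite tables above while the verify pass runs: the MiB/s column is just IOPS times I/O size, and the queue-depth-128 result agrees with Little's law (bash arithmetic, results in bytes per second and IOPS):

    echo $(( 178692 * 69632 / 100 ))    # 1786.92 IOPS * 69632 B = ~124.4 MB/s, i.e. the 118.66 MiB/s reported
    echo $(( 784451 * 4096 / 100 ))     # 7844.51 IOPS * 4096 B  = ~32.1 MB/s, i.e. the 30.64 MiB/s reported
    echo $(( 128 * 1000000 / 16278 ))   # 128 outstanding / 16.27765 ms avg latency = ~7863 IOPS (reported: 7844.51)

Going from depth 1 to depth 128 raises average latency from 588 us to 16.3 ms, but IOPS climb from ~1787 to ~7845; throughput in MiB/s drops only because the I/O size shrank from 68 KiB to 4 KiB between the two passes.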
00:19:17.241
00:19:17.241 Latency(us)
00:19:17.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:17.241 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.241 Verification LBA range: start 0x0 length 0x1400000
00:19:17.241 ftl0 : 4.01 9572.64 37.39 0.00 0.00 13335.09 202.94 22639.71
00:19:17.241 ===================================================================================================================
00:19:17.241 Total : 9572.64 37.39 0.00 0.00 13335.09 0.00 22639.71
00:19:17.241 [2024-05-15 12:40:25.738499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:17.241 0
00:19:17.241 12:40:25 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:19:17.241 [2024-05-15 12:40:25.952226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:17.241 [2024-05-15 12:40:25.952292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:17.241 [2024-05-15 12:40:25.952340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:17.241 [2024-05-15 12:40:25.952353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:17.241 [2024-05-15 12:40:25.952391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:17.241 [2024-05-15 12:40:25.956016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:17.241 [2024-05-15 12:40:25.956056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:17.241 [2024-05-15 12:40:25.956089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms
00:19:17.241 [2024-05-15 12:40:25.956109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:17.241 [2024-05-15 12:40:25.957838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:17.241 [2024-05-15 12:40:25.957920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:17.241 [2024-05-15 12:40:25.957953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms
00:19:17.241 [2024-05-15 12:40:25.957968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:17.241 [2024-05-15 12:40:26.135362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:17.241 [2024-05-15 12:40:26.135437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:17.241 [2024-05-15 12:40:26.135462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 177.367 ms
00:19:17.241 [2024-05-15 12:40:26.135477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:17.241 [2024-05-15 12:40:26.142294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:17.241 [2024-05-15 12:40:26.142371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:19:17.241 [2024-05-15 12:40:26.142388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.746 ms
00:19:17.241 [2024-05-15 12:40:26.142402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:17.241 [2024-05-15 12:40:26.173501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:17.241 [2024-05-15 12:40:26.173598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:17.241 [2024-05-15 12:40:26.173617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration:
31.004 ms 00:19:17.241 [2024-05-15 12:40:26.173636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.241 [2024-05-15 12:40:26.191362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.241 [2024-05-15 12:40:26.191418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:17.241 [2024-05-15 12:40:26.191452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.680 ms 00:19:17.241 [2024-05-15 12:40:26.191465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.241 [2024-05-15 12:40:26.191691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.241 [2024-05-15 12:40:26.191718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:17.241 [2024-05-15 12:40:26.191735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:17.241 [2024-05-15 12:40:26.191748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.241 [2024-05-15 12:40:26.219715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.241 [2024-05-15 12:40:26.219788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:17.241 [2024-05-15 12:40:26.219807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.943 ms 00:19:17.241 [2024-05-15 12:40:26.219821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.241 [2024-05-15 12:40:26.250427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.241 [2024-05-15 12:40:26.250556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:17.241 [2024-05-15 12:40:26.250580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.558 ms 00:19:17.241 [2024-05-15 12:40:26.250597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.501 [2024-05-15 12:40:26.280212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.501 [2024-05-15 12:40:26.280275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:17.501 [2024-05-15 12:40:26.280295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.529 ms 00:19:17.501 [2024-05-15 12:40:26.280308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.501 [2024-05-15 12:40:26.309372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.501 [2024-05-15 12:40:26.309438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:17.501 [2024-05-15 12:40:26.309457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.959 ms 00:19:17.501 [2024-05-15 12:40:26.309470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.501 [2024-05-15 12:40:26.309555] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:17.501 [2024-05-15 12:40:26.309586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 
12:40:26.309646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.309990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:17.501 [2024-05-15 12:40:26.310016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:17.501 [2024-05-15 12:40:26.310211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.310986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:17.502 [2024-05-15 12:40:26.311025] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:17.502 [2024-05-15 12:40:26.311038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4f7c2f6b-a0f0-4f42-97ff-f0ad133c13b9 00:19:17.502 [2024-05-15 12:40:26.311055] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:17.502 [2024-05-15 12:40:26.311066] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:17.502 
[2024-05-15 12:40:26.311079] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:17.502 [2024-05-15 12:40:26.311091] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:17.502 [2024-05-15 12:40:26.311104] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:17.502 [2024-05-15 12:40:26.311116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:17.502 [2024-05-15 12:40:26.311129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:17.502 [2024-05-15 12:40:26.311139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:17.502 [2024-05-15 12:40:26.311152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:17.502 [2024-05-15 12:40:26.311164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.502 [2024-05-15 12:40:26.311181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:17.502 [2024-05-15 12:40:26.311193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:19:17.502 [2024-05-15 12:40:26.311207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.502 [2024-05-15 12:40:26.327961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.502 [2024-05-15 12:40:26.328055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.502 [2024-05-15 12:40:26.328073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.698 ms 00:19:17.502 [2024-05-15 12:40:26.328090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.502 [2024-05-15 12:40:26.328347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.502 [2024-05-15 12:40:26.328367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.502 [2024-05-15 12:40:26.328379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:19:17.502 [2024-05-15 12:40:26.328392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.502 [2024-05-15 12:40:26.377876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.502 [2024-05-15 12:40:26.377942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.502 [2024-05-15 12:40:26.377961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.502 [2024-05-15 12:40:26.377977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.502 [2024-05-15 12:40:26.378064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.502 [2024-05-15 12:40:26.378084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.503 [2024-05-15 12:40:26.378097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.503 [2024-05-15 12:40:26.378111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.503 [2024-05-15 12:40:26.378219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.503 [2024-05-15 12:40:26.378245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.503 [2024-05-15 12:40:26.378259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.503 [2024-05-15 12:40:26.378276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.503 [2024-05-15 12:40:26.378300] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:17.503 [2024-05-15 12:40:26.378320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.503 [2024-05-15 12:40:26.378333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.503 [2024-05-15 12:40:26.378346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.503 [2024-05-15 12:40:26.480092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.503 [2024-05-15 12:40:26.480191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.503 [2024-05-15 12:40:26.480212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.503 [2024-05-15 12:40:26.480227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.761 [2024-05-15 12:40:26.519935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.761 [2024-05-15 12:40:26.520067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.761 [2024-05-15 12:40:26.520087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.761 [2024-05-15 12:40:26.520101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.761 [2024-05-15 12:40:26.520206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.761 [2024-05-15 12:40:26.520228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.761 [2024-05-15 12:40:26.520241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.761 [2024-05-15 12:40:26.520258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.761 [2024-05-15 12:40:26.520315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.761 [2024-05-15 12:40:26.520335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.761 [2024-05-15 12:40:26.520350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.761 [2024-05-15 12:40:26.520363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.761 [2024-05-15 12:40:26.520501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.761 [2024-05-15 12:40:26.520553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.761 [2024-05-15 12:40:26.520588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.761 [2024-05-15 12:40:26.520621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.761 [2024-05-15 12:40:26.520687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.761 [2024-05-15 12:40:26.520709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.761 [2024-05-15 12:40:26.520724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.761 [2024-05-15 12:40:26.520741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.761 [2024-05-15 12:40:26.520792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.761 [2024-05-15 12:40:26.520811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.761 [2024-05-15 12:40:26.520824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.761 [2024-05-15 12:40:26.520841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
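Each rollback step above reports status 0, after which the test stops the bdevperf app through the killprocess helper traced a few entries below. A condensed re-creation of that helper from the traced commands (the real version also escalates through sudo for root-owned processes, which this sketch omits):

  # killprocess <pid>: kill a test app only if it is still running and
  # is not the sudo wrapper itself (sketch of the traced logic).
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1        # still alive?
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1    # don't kill the wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true               # reap, ignore exit code
  }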
00:19:17.761 [2024-05-15 12:40:26.520896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.761 [2024-05-15 12:40:26.520917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.761 [2024-05-15 12:40:26.520933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.761 [2024-05-15 12:40:26.520946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.761 [2024-05-15 12:40:26.521108] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.838 ms, result 0 00:19:17.761 true 00:19:17.761 12:40:26 -- ftl/bdevperf.sh@37 -- # killprocess 73460 00:19:17.761 12:40:26 -- common/autotest_common.sh@926 -- # '[' -z 73460 ']' 00:19:17.761 12:40:26 -- common/autotest_common.sh@930 -- # kill -0 73460 00:19:17.761 12:40:26 -- common/autotest_common.sh@931 -- # uname 00:19:17.761 12:40:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:17.761 12:40:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73460 00:19:17.761 12:40:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:17.761 12:40:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:17.761 12:40:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73460' 00:19:17.761 killing process with pid 73460 00:19:17.761 12:40:26 -- common/autotest_common.sh@945 -- # kill 73460 00:19:17.761 Received shutdown signal, test time was about 4.000000 seconds 00:19:17.761 00:19:17.761 Latency(us) 00:19:17.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.761 =================================================================================================================== 00:19:17.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.761 12:40:26 -- common/autotest_common.sh@950 -- # wait 73460 00:19:19.136 12:40:27 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:19:19.136 12:40:27 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:19.136 12:40:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:19.136 12:40:27 -- common/autotest_common.sh@10 -- # set +x 00:19:19.136 Remove shared memory files 00:19:19.136 12:40:27 -- ftl/bdevperf.sh@41 -- # remove_shm 00:19:19.136 12:40:27 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:19.136 12:40:27 -- ftl/common.sh@205 -- # rm -f rm -f 00:19:19.136 12:40:27 -- ftl/common.sh@206 -- # rm -f rm -f 00:19:19.136 12:40:27 -- ftl/common.sh@207 -- # rm -f rm -f 00:19:19.136 12:40:27 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:19.136 12:40:27 -- ftl/common.sh@209 -- # rm -f rm -f 00:19:19.136 ************************************ 00:19:19.136 END TEST ftl_bdevperf 00:19:19.136 ************************************ 00:19:19.136 00:19:19.136 real 0m22.413s 00:19:19.136 user 0m25.645s 00:19:19.136 sys 0m1.184s 00:19:19.136 12:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.136 12:40:27 -- common/autotest_common.sh@10 -- # set +x 00:19:19.136 12:40:27 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:19:19.136 12:40:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:19.136 12:40:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:19.136 12:40:27 -- common/autotest_common.sh@10 -- # set +x 00:19:19.136 ************************************ 
00:19:19.136 START TEST ftl_trim
************************************
00:19:19.136 12:40:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
12:40:27 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
12:40:27 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
12:40:27 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
12:40:27 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
12:40:27 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
12:40:27 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
12:40:27 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
12:40:27 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
12:40:27 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
12:40:27 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
12:40:27 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
12:40:27 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
12:40:27 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
12:40:27 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
12:40:27 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
12:40:27 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
12:40:27 -- ftl/common.sh@17 -- # spdk_tgt_pid=
12:40:27 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
12:40:27 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
12:40:27 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
12:40:27 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
12:40:27 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
12:40:27 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
12:40:27 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
12:40:27 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
12:40:27 -- ftl/common.sh@23 -- # export spdk_ini_pid=
12:40:27 -- ftl/common.sh@23 -- # spdk_ini_pid=
12:40:27 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
12:40:27 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
12:40:27 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
12:40:27 -- ftl/trim.sh@23 -- # device=0000:00:07.0
12:40:27 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0
12:40:27 -- ftl/trim.sh@25 -- # timeout=240
12:40:27 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
12:40:27 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
12:40:27 -- ftl/trim.sh@29 -- # [[ y != y ]]
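With the trim parameters fixed (240 s RPC timeout, 65536 data blocks, 1024-block unmaps), the script next exports the FTL bdev settings and brings up a dedicated SPDK target. A minimal sketch of that launch-and-wait pattern; polling spdk_get_version is an assumption standing in for the fuller waitforlisten helper traced below:

  # Sketch: start spdk_tgt and wait until its RPC socket answers.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_tgt -m 0x7 &
  svcpid=$!
  trap 'kill "$svcpid"; exit 1' SIGINT SIGTERM EXIT
  for _ in $(seq 1 100); do   # mirrors max_retries=100 in the trace below
      "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
      sleep 0.5
  done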
12:40:27 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:19:19.137 12:40:27 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:19:19.137 12:40:27 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:19.137 12:40:27 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:19.137 12:40:27 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:19:19.137 12:40:27 -- ftl/trim.sh@40 -- # svcpid=73814
00:19:19.137 12:40:27 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:19:19.137 12:40:27 -- ftl/trim.sh@41 -- # waitforlisten 73814
00:19:19.137 12:40:27 -- common/autotest_common.sh@819 -- # '[' -z 73814 ']'
00:19:19.137 12:40:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:19.137 12:40:27 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:19.137 12:40:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:19.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:19.137 12:40:27 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:19.137 12:40:27 -- common/autotest_common.sh@10 -- # set +x
00:19:19.196 [2024-05-15 12:40:28.109766] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:19.196 [2024-05-15 12:40:28.109927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73814 ]
00:19:19.396 [2024-05-15 12:40:28.283695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:19.654 [2024-05-15 12:40:28.563694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:19:19.654 [2024-05-15 12:40:28.564045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:19.654 [2024-05-15 12:40:28.564336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:19.654 [2024-05-15 12:40:28.564351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:21.030 12:40:29 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:21.030 12:40:29 -- common/autotest_common.sh@852 -- # return 0
00:19:21.030 12:40:29 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:19:21.030 12:40:29 -- ftl/common.sh@54 -- # local name=nvme0
00:19:21.030 12:40:29 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:19:21.030 12:40:29 -- ftl/common.sh@56 -- # local size=103424
00:19:21.030 12:40:29 -- ftl/common.sh@59 -- # local base_bdev
00:19:21.030 12:40:29 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:19:21.289 12:40:30 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:19:21.289 12:40:30 -- ftl/common.sh@62 -- # local base_size
00:19:21.289 12:40:30 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:19:21.289 12:40:30 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1
00:19:21.289 12:40:30 -- common/autotest_common.sh@1358 -- # local bdev_info
00:19:21.289 12:40:30 -- common/autotest_common.sh@1359 -- # local bs
00:19:21.289 12:40:30 -- common/autotest_common.sh@1360 -- # local nb
00:19:21.289 12:40:30 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
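The get_bdev_size helper being traced here reduces to two jq lookups over the bdev_get_bdevs JSON that follows; a compact re-creation of it (assuming rpc.py and jq are on PATH):

  # Sketch of get_bdev_size as traced in this log: size in MiB is
  # block_size * num_blocks / 1 MiB, both read from bdev_get_bdevs JSON.
  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<<"$bdev_info")
      nb=$(jq '.[] .num_blocks' <<<"$bdev_info")
      echo $(( bs * nb / 1024 / 1024 ))   # e.g. 4096 * 1310720 -> 5120 MiB
  }

00:19:21.546 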
12:40:30 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:21.546 { 00:19:21.546 "name": "nvme0n1", 00:19:21.546 "aliases": [ 00:19:21.546 "b96ca7a5-8256-41cb-beed-7cd23accabae" 00:19:21.546 ], 00:19:21.546 "product_name": "NVMe disk", 00:19:21.546 "block_size": 4096, 00:19:21.546 "num_blocks": 1310720, 00:19:21.546 "uuid": "b96ca7a5-8256-41cb-beed-7cd23accabae", 00:19:21.546 "assigned_rate_limits": { 00:19:21.546 "rw_ios_per_sec": 0, 00:19:21.546 "rw_mbytes_per_sec": 0, 00:19:21.546 "r_mbytes_per_sec": 0, 00:19:21.546 "w_mbytes_per_sec": 0 00:19:21.546 }, 00:19:21.546 "claimed": true, 00:19:21.546 "claim_type": "read_many_write_one", 00:19:21.546 "zoned": false, 00:19:21.546 "supported_io_types": { 00:19:21.546 "read": true, 00:19:21.546 "write": true, 00:19:21.546 "unmap": true, 00:19:21.546 "write_zeroes": true, 00:19:21.546 "flush": true, 00:19:21.546 "reset": true, 00:19:21.546 "compare": true, 00:19:21.546 "compare_and_write": false, 00:19:21.546 "abort": true, 00:19:21.546 "nvme_admin": true, 00:19:21.546 "nvme_io": true 00:19:21.546 }, 00:19:21.546 "driver_specific": { 00:19:21.546 "nvme": [ 00:19:21.546 { 00:19:21.546 "pci_address": "0000:00:07.0", 00:19:21.546 "trid": { 00:19:21.546 "trtype": "PCIe", 00:19:21.546 "traddr": "0000:00:07.0" 00:19:21.546 }, 00:19:21.546 "ctrlr_data": { 00:19:21.546 "cntlid": 0, 00:19:21.546 "vendor_id": "0x1b36", 00:19:21.546 "model_number": "QEMU NVMe Ctrl", 00:19:21.546 "serial_number": "12341", 00:19:21.546 "firmware_revision": "8.0.0", 00:19:21.546 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:21.546 "oacs": { 00:19:21.546 "security": 0, 00:19:21.546 "format": 1, 00:19:21.546 "firmware": 0, 00:19:21.546 "ns_manage": 1 00:19:21.546 }, 00:19:21.546 "multi_ctrlr": false, 00:19:21.546 "ana_reporting": false 00:19:21.546 }, 00:19:21.546 "vs": { 00:19:21.546 "nvme_version": "1.4" 00:19:21.546 }, 00:19:21.546 "ns_data": { 00:19:21.546 "id": 1, 00:19:21.546 "can_share": false 00:19:21.546 } 00:19:21.546 } 00:19:21.546 ], 00:19:21.546 "mp_policy": "active_passive" 00:19:21.546 } 00:19:21.546 } 00:19:21.546 ]' 00:19:21.546 12:40:30 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:21.546 12:40:30 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:21.546 12:40:30 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:21.546 12:40:30 -- common/autotest_common.sh@1363 -- # nb=1310720 00:19:21.546 12:40:30 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:19:21.546 12:40:30 -- common/autotest_common.sh@1367 -- # echo 5120 00:19:21.546 12:40:30 -- ftl/common.sh@63 -- # base_size=5120 00:19:21.546 12:40:30 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:21.546 12:40:30 -- ftl/common.sh@67 -- # clear_lvols 00:19:21.546 12:40:30 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:21.546 12:40:30 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:21.803 12:40:30 -- ftl/common.sh@28 -- # stores=bfe1ec43-d7ee-410a-8565-61354db8e1c0 00:19:21.803 12:40:30 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:21.803 12:40:30 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfe1ec43-d7ee-410a-8565-61354db8e1c0 00:19:22.061 12:40:31 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:22.320 12:40:31 -- ftl/common.sh@68 -- # lvs=ae63c405-74d5-4a4d-bf8c-9157418ed66b 00:19:22.320 12:40:31 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
nvme0n1p0 103424 -t -u ae63c405-74d5-4a4d-bf8c-9157418ed66b 00:19:22.578 12:40:31 -- ftl/trim.sh@43 -- # split_bdev=57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:22.578 12:40:31 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:22.578 12:40:31 -- ftl/common.sh@35 -- # local name=nvc0 00:19:22.578 12:40:31 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:22.578 12:40:31 -- ftl/common.sh@37 -- # local base_bdev=57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:22.578 12:40:31 -- ftl/common.sh@38 -- # local cache_size= 00:19:22.578 12:40:31 -- ftl/common.sh@41 -- # get_bdev_size 57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:22.578 12:40:31 -- common/autotest_common.sh@1357 -- # local bdev_name=57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:22.578 12:40:31 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:22.578 12:40:31 -- common/autotest_common.sh@1359 -- # local bs 00:19:22.578 12:40:31 -- common/autotest_common.sh@1360 -- # local nb 00:19:22.578 12:40:31 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:22.837 12:40:31 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:22.837 { 00:19:22.837 "name": "57af7d92-23ab-4b8c-872e-23cc612d8e27", 00:19:22.837 "aliases": [ 00:19:22.837 "lvs/nvme0n1p0" 00:19:22.837 ], 00:19:22.837 "product_name": "Logical Volume", 00:19:22.837 "block_size": 4096, 00:19:22.837 "num_blocks": 26476544, 00:19:22.837 "uuid": "57af7d92-23ab-4b8c-872e-23cc612d8e27", 00:19:22.837 "assigned_rate_limits": { 00:19:22.837 "rw_ios_per_sec": 0, 00:19:22.837 "rw_mbytes_per_sec": 0, 00:19:22.837 "r_mbytes_per_sec": 0, 00:19:22.837 "w_mbytes_per_sec": 0 00:19:22.837 }, 00:19:22.837 "claimed": false, 00:19:22.837 "zoned": false, 00:19:22.837 "supported_io_types": { 00:19:22.837 "read": true, 00:19:22.837 "write": true, 00:19:22.837 "unmap": true, 00:19:22.837 "write_zeroes": true, 00:19:22.837 "flush": false, 00:19:22.837 "reset": true, 00:19:22.837 "compare": false, 00:19:22.837 "compare_and_write": false, 00:19:22.837 "abort": false, 00:19:22.837 "nvme_admin": false, 00:19:22.837 "nvme_io": false 00:19:22.837 }, 00:19:22.837 "driver_specific": { 00:19:22.837 "lvol": { 00:19:22.837 "lvol_store_uuid": "ae63c405-74d5-4a4d-bf8c-9157418ed66b", 00:19:22.837 "base_bdev": "nvme0n1", 00:19:22.837 "thin_provision": true, 00:19:22.837 "snapshot": false, 00:19:22.837 "clone": false, 00:19:22.837 "esnap_clone": false 00:19:22.837 } 00:19:22.837 } 00:19:22.837 } 00:19:22.837 ]' 00:19:22.837 12:40:31 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:22.837 12:40:31 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:22.837 12:40:31 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:23.096 12:40:31 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:23.096 12:40:31 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:23.096 12:40:31 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:23.096 12:40:31 -- ftl/common.sh@41 -- # local base_size=5171 00:19:23.096 12:40:31 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:23.096 12:40:31 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:19:23.359 12:40:32 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:23.360 12:40:32 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:23.360 12:40:32 -- ftl/common.sh@48 -- # get_bdev_size 57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:23.360 12:40:32 
-- common/autotest_common.sh@1357 -- # local bdev_name=57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:23.360 12:40:32 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:23.360 12:40:32 -- common/autotest_common.sh@1359 -- # local bs 00:19:23.360 12:40:32 -- common/autotest_common.sh@1360 -- # local nb 00:19:23.360 12:40:32 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:23.619 12:40:32 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:23.619 { 00:19:23.619 "name": "57af7d92-23ab-4b8c-872e-23cc612d8e27", 00:19:23.619 "aliases": [ 00:19:23.619 "lvs/nvme0n1p0" 00:19:23.619 ], 00:19:23.619 "product_name": "Logical Volume", 00:19:23.619 "block_size": 4096, 00:19:23.619 "num_blocks": 26476544, 00:19:23.619 "uuid": "57af7d92-23ab-4b8c-872e-23cc612d8e27", 00:19:23.619 "assigned_rate_limits": { 00:19:23.619 "rw_ios_per_sec": 0, 00:19:23.619 "rw_mbytes_per_sec": 0, 00:19:23.619 "r_mbytes_per_sec": 0, 00:19:23.619 "w_mbytes_per_sec": 0 00:19:23.619 }, 00:19:23.619 "claimed": false, 00:19:23.619 "zoned": false, 00:19:23.619 "supported_io_types": { 00:19:23.619 "read": true, 00:19:23.619 "write": true, 00:19:23.619 "unmap": true, 00:19:23.619 "write_zeroes": true, 00:19:23.619 "flush": false, 00:19:23.619 "reset": true, 00:19:23.619 "compare": false, 00:19:23.619 "compare_and_write": false, 00:19:23.619 "abort": false, 00:19:23.619 "nvme_admin": false, 00:19:23.619 "nvme_io": false 00:19:23.619 }, 00:19:23.619 "driver_specific": { 00:19:23.619 "lvol": { 00:19:23.619 "lvol_store_uuid": "ae63c405-74d5-4a4d-bf8c-9157418ed66b", 00:19:23.619 "base_bdev": "nvme0n1", 00:19:23.619 "thin_provision": true, 00:19:23.619 "snapshot": false, 00:19:23.619 "clone": false, 00:19:23.619 "esnap_clone": false 00:19:23.619 } 00:19:23.619 } 00:19:23.619 } 00:19:23.619 ]' 00:19:23.619 12:40:32 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:23.619 12:40:32 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:23.619 12:40:32 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:23.619 12:40:32 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:23.619 12:40:32 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:23.619 12:40:32 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:23.619 12:40:32 -- ftl/common.sh@48 -- # cache_size=5171 00:19:23.619 12:40:32 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:23.878 12:40:32 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:23.878 12:40:32 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:23.878 12:40:32 -- ftl/trim.sh@47 -- # get_bdev_size 57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:23.878 12:40:32 -- common/autotest_common.sh@1357 -- # local bdev_name=57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:23.878 12:40:32 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:23.878 12:40:32 -- common/autotest_common.sh@1359 -- # local bs 00:19:23.878 12:40:32 -- common/autotest_common.sh@1360 -- # local nb 00:19:23.878 12:40:32 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57af7d92-23ab-4b8c-872e-23cc612d8e27 00:19:24.136 12:40:32 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:24.136 { 00:19:24.136 "name": "57af7d92-23ab-4b8c-872e-23cc612d8e27", 00:19:24.136 "aliases": [ 00:19:24.136 "lvs/nvme0n1p0" 00:19:24.136 ], 00:19:24.136 "product_name": "Logical Volume", 00:19:24.136 "block_size": 4096, 00:19:24.136 
"num_blocks": 26476544, 00:19:24.136 "uuid": "57af7d92-23ab-4b8c-872e-23cc612d8e27", 00:19:24.136 "assigned_rate_limits": { 00:19:24.136 "rw_ios_per_sec": 0, 00:19:24.136 "rw_mbytes_per_sec": 0, 00:19:24.136 "r_mbytes_per_sec": 0, 00:19:24.136 "w_mbytes_per_sec": 0 00:19:24.136 }, 00:19:24.136 "claimed": false, 00:19:24.136 "zoned": false, 00:19:24.136 "supported_io_types": { 00:19:24.136 "read": true, 00:19:24.136 "write": true, 00:19:24.136 "unmap": true, 00:19:24.136 "write_zeroes": true, 00:19:24.136 "flush": false, 00:19:24.136 "reset": true, 00:19:24.136 "compare": false, 00:19:24.136 "compare_and_write": false, 00:19:24.136 "abort": false, 00:19:24.136 "nvme_admin": false, 00:19:24.136 "nvme_io": false 00:19:24.136 }, 00:19:24.136 "driver_specific": { 00:19:24.136 "lvol": { 00:19:24.136 "lvol_store_uuid": "ae63c405-74d5-4a4d-bf8c-9157418ed66b", 00:19:24.136 "base_bdev": "nvme0n1", 00:19:24.136 "thin_provision": true, 00:19:24.136 "snapshot": false, 00:19:24.136 "clone": false, 00:19:24.136 "esnap_clone": false 00:19:24.136 } 00:19:24.136 } 00:19:24.136 } 00:19:24.136 ]' 00:19:24.136 12:40:32 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:24.136 12:40:33 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:24.136 12:40:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:24.136 12:40:33 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:24.136 12:40:33 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:24.136 12:40:33 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:24.136 12:40:33 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:24.136 12:40:33 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 57af7d92-23ab-4b8c-872e-23cc612d8e27 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:24.394 [2024-05-15 12:40:33.359910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.360007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.394 [2024-05-15 12:40:33.360050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:24.394 [2024-05-15 12:40:33.360068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.368674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.368780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.394 [2024-05-15 12:40:33.368824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.544 ms 00:19:24.394 [2024-05-15 12:40:33.368851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.369284] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.394 [2024-05-15 12:40:33.370651] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.394 [2024-05-15 12:40:33.370701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.370718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.394 [2024-05-15 12:40:33.370734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:19:24.394 [2024-05-15 12:40:33.370747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.370972] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3c28e999-af9a-4c95-b334-96e908a03298 00:19:24.394 [2024-05-15 12:40:33.372895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.372941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:24.394 [2024-05-15 12:40:33.372960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:24.394 [2024-05-15 12:40:33.372975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.383099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.383155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.394 [2024-05-15 12:40:33.383177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.967 ms 00:19:24.394 [2024-05-15 12:40:33.383192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.383409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.383437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.394 [2024-05-15 12:40:33.383452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:24.394 [2024-05-15 12:40:33.383472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.383569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.383600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.394 [2024-05-15 12:40:33.383615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:24.394 [2024-05-15 12:40:33.383633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.383695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:24.394 [2024-05-15 12:40:33.388953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.388999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.394 [2024-05-15 12:40:33.389020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:19:24.394 [2024-05-15 12:40:33.389043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.389131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.389149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.394 [2024-05-15 12:40:33.389165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:24.394 [2024-05-15 12:40:33.389177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.389231] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:24.394 [2024-05-15 12:40:33.389396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:24.394 [2024-05-15 12:40:33.389422] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.394 [2024-05-15 12:40:33.389438] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:19:24.394 [2024-05-15 12:40:33.389457] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.394 [2024-05-15 12:40:33.389472] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.394 [2024-05-15 12:40:33.389487] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:24.394 [2024-05-15 12:40:33.389536] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.394 [2024-05-15 12:40:33.389566] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:24.394 [2024-05-15 12:40:33.389582] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:24.394 [2024-05-15 12:40:33.389597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.389610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.394 [2024-05-15 12:40:33.389624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:19:24.394 [2024-05-15 12:40:33.389637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.389741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.389756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.394 [2024-05-15 12:40:33.389771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:24.394 [2024-05-15 12:40:33.389783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.394 [2024-05-15 12:40:33.389935] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.394 [2024-05-15 12:40:33.389952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.394 [2024-05-15 12:40:33.389966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.394 [2024-05-15 12:40:33.389979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.394 [2024-05-15 12:40:33.389995] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.394 [2024-05-15 12:40:33.390006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.394 [2024-05-15 12:40:33.390049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.394 [2024-05-15 12:40:33.390073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.394 [2024-05-15 12:40:33.390086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:24.394 [2024-05-15 12:40:33.390101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.394 [2024-05-15 12:40:33.390113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.394 [2024-05-15 12:40:33.390126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:24.394 [2024-05-15 12:40:33.390137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390153] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:19:24.394 [2024-05-15 12:40:33.390164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:24.394 [2024-05-15 12:40:33.390178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:24.394 [2024-05-15 12:40:33.390202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:24.394 [2024-05-15 12:40:33.390214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.394 [2024-05-15 12:40:33.390241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.394 [2024-05-15 12:40:33.390279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.394 [2024-05-15 12:40:33.390315] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:24.394 [2024-05-15 12:40:33.390355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390379] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.394 [2024-05-15 12:40:33.390391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.394 [2024-05-15 12:40:33.390415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.394 [2024-05-15 12:40:33.390430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:24.394 [2024-05-15 12:40:33.390442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.394 [2024-05-15 12:40:33.390455] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.394 [2024-05-15 12:40:33.390468] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.394 [2024-05-15 12:40:33.390482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.394 [2024-05-15 12:40:33.390528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.394 [2024-05-15 12:40:33.390540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.394 [2024-05-15 12:40:33.390554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.394 [2024-05-15 12:40:33.390567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.394 [2024-05-15 12:40:33.390583] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.394 [2024-05-15 12:40:33.390595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.394 [2024-05-15 12:40:33.390611] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.394 [2024-05-15 12:40:33.390627] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.394 [2024-05-15 12:40:33.390643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:24.394 [2024-05-15 12:40:33.390656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:24.394 [2024-05-15 12:40:33.390671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:24.394 [2024-05-15 12:40:33.390684] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:24.394 [2024-05-15 12:40:33.390699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:24.394 [2024-05-15 12:40:33.390711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:24.394 [2024-05-15 12:40:33.390726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:24.394 [2024-05-15 12:40:33.390739] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:24.394 [2024-05-15 12:40:33.390753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:24.394 [2024-05-15 12:40:33.390772] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:24.394 [2024-05-15 12:40:33.390786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:24.394 [2024-05-15 12:40:33.390800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:24.394 [2024-05-15 12:40:33.390820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:24.394 [2024-05-15 12:40:33.390832] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.394 [2024-05-15 12:40:33.390853] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.394 [2024-05-15 12:40:33.390866] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:24.394 [2024-05-15 12:40:33.390881] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.394 [2024-05-15 12:40:33.390894] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.394 [2024-05-15 12:40:33.390909] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.394 [2024-05-15 12:40:33.390922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.394 [2024-05-15 12:40:33.390938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.394 [2024-05-15 12:40:33.390950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:19:24.394 [2024-05-15 12:40:33.390965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.414044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.414273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.652 [2024-05-15 12:40:33.414410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.946 ms 00:19:24.652 [2024-05-15 12:40:33.414472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.414722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.414810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:24.652 [2024-05-15 12:40:33.414933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:24.652 [2024-05-15 12:40:33.414992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.461679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.461958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.652 [2024-05-15 12:40:33.462092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.513 ms 00:19:24.652 [2024-05-15 12:40:33.462230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.462419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.462523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.652 [2024-05-15 12:40:33.462710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.652 [2024-05-15 12:40:33.462774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.463525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.463676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.652 [2024-05-15 12:40:33.463801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:19:24.652 [2024-05-15 12:40:33.463857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.464133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.464275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.652 [2024-05-15 12:40:33.464388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:24.652 [2024-05-15 12:40:33.464445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.502384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 
12:40:33.502639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.652 [2024-05-15 12:40:33.502777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.745 ms 00:19:24.652 [2024-05-15 12:40:33.502894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.517657] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:24.652 [2024-05-15 12:40:33.539749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.539828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:24.652 [2024-05-15 12:40:33.539855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.543 ms 00:19:24.652 [2024-05-15 12:40:33.539869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.621269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.652 [2024-05-15 12:40:33.621353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:24.652 [2024-05-15 12:40:33.621379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.240 ms 00:19:24.652 [2024-05-15 12:40:33.621393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.652 [2024-05-15 12:40:33.621571] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:24.652 [2024-05-15 12:40:33.621597] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:27.179 [2024-05-15 12:40:36.033891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.179 [2024-05-15 12:40:36.033995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:27.179 [2024-05-15 12:40:36.034036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2412.322 ms 00:19:27.179 [2024-05-15 12:40:36.034053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.179 [2024-05-15 12:40:36.034377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.179 [2024-05-15 12:40:36.034401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.179 [2024-05-15 12:40:36.034422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:19:27.179 [2024-05-15 12:40:36.034438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.179 [2024-05-15 12:40:36.065905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.179 [2024-05-15 12:40:36.065960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:27.179 [2024-05-15 12:40:36.065988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.397 ms 00:19:27.179 [2024-05-15 12:40:36.066005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.179 [2024-05-15 12:40:36.096483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.179 [2024-05-15 12:40:36.096543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:27.179 [2024-05-15 12:40:36.096575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.352 ms 00:19:27.179 [2024-05-15 12:40:36.096591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.179 [2024-05-15 12:40:36.097090] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.179 [2024-05-15 12:40:36.097121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.179 [2024-05-15 12:40:36.097143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:19:27.180 [2024-05-15 12:40:36.097159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.180 [2024-05-15 12:40:36.178036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.180 [2024-05-15 12:40:36.178114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:27.180 [2024-05-15 12:40:36.178146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.811 ms 00:19:27.180 [2024-05-15 12:40:36.178163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.438 [2024-05-15 12:40:36.211874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.438 [2024-05-15 12:40:36.211935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:27.438 [2024-05-15 12:40:36.211982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.566 ms 00:19:27.438 [2024-05-15 12:40:36.212017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.438 [2024-05-15 12:40:36.217138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.438 [2024-05-15 12:40:36.217187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:27.438 [2024-05-15 12:40:36.217233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.006 ms 00:19:27.438 [2024-05-15 12:40:36.217248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.438 [2024-05-15 12:40:36.248696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.438 [2024-05-15 12:40:36.248746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.438 [2024-05-15 12:40:36.248772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.365 ms 00:19:27.438 [2024-05-15 12:40:36.248788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.438 [2024-05-15 12:40:36.248916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.438 [2024-05-15 12:40:36.248939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.438 [2024-05-15 12:40:36.248959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:27.438 [2024-05-15 12:40:36.248974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.438 [2024-05-15 12:40:36.249104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.438 [2024-05-15 12:40:36.249124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.438 [2024-05-15 12:40:36.249147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:27.438 [2024-05-15 12:40:36.249162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.438 [2024-05-15 12:40:36.250469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.438 [2024-05-15 12:40:36.254646] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2890.246 ms, result 0 00:19:27.438 [2024-05-15 12:40:36.255709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread
00:19:27.438 {
00:19:27.438 "name": "ftl0",
00:19:27.438 "uuid": "3c28e999-af9a-4c95-b334-96e908a03298"
00:19:27.438 }
00:19:27.438 12:40:36 -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:19:27.438 12:40:36 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0
00:19:27.438 12:40:36 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:19:27.438 12:40:36 -- common/autotest_common.sh@889 -- # local i
00:19:27.438 12:40:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:19:27.438 12:40:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:19:27.438 12:40:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:19:27.697 12:40:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 [
00:19:27.956 {
00:19:27.956 "name": "ftl0",
00:19:27.956 "aliases": [
00:19:27.956 "3c28e999-af9a-4c95-b334-96e908a03298"
00:19:27.956 ],
00:19:27.956 "product_name": "FTL disk",
00:19:27.956 "block_size": 4096,
00:19:27.956 "num_blocks": 23592960,
00:19:27.956 "uuid": "3c28e999-af9a-4c95-b334-96e908a03298",
00:19:27.956 "assigned_rate_limits": {
00:19:27.956 "rw_ios_per_sec": 0,
00:19:27.956 "rw_mbytes_per_sec": 0,
00:19:27.956 "r_mbytes_per_sec": 0,
00:19:27.956 "w_mbytes_per_sec": 0
00:19:27.956 },
00:19:27.956 "claimed": false,
00:19:27.956 "zoned": false,
00:19:27.956 "supported_io_types": {
00:19:27.956 "read": true,
00:19:27.956 "write": true,
00:19:27.956 "unmap": true,
00:19:27.956 "write_zeroes": true,
00:19:27.956 "flush": true,
00:19:27.956 "reset": false,
00:19:27.956 "compare": false,
00:19:27.956 "compare_and_write": false,
00:19:27.956 "abort": false,
00:19:27.956 "nvme_admin": false,
00:19:27.956 "nvme_io": false
00:19:27.956 },
00:19:27.956 "driver_specific": {
00:19:27.956 "ftl": {
00:19:27.956 "base_bdev": "57af7d92-23ab-4b8c-872e-23cc612d8e27",
00:19:27.956 "cache": "nvc0n1p0"
00:19:27.956 }
00:19:27.956 }
00:19:27.956 }
00:19:27.956 ]
00:19:27.956 12:40:36 -- common/autotest_common.sh@895 -- # return 0
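The waitforbdev trace above is the harness's readiness gate: it first drains the bdev examine callbacks, then asks for the bdev with a bounded wait. A minimal stand-alone sketch of that idiom, assuming only what the trace shows (the rpc.py path and the 2000 ms default); the rest is simplified:

  #!/usr/bin/env bash
  # Sketch of the waitforbdev idiom from autotest_common.sh (simplified).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  waitforbdev() {
      local bdev_name=$1
      local bdev_timeout=${2:-2000}    # milliseconds, as in the trace above
      # Block until all registered bdevs have finished their examine phase.
      "$rpc" bdev_wait_for_examine
      # -t makes bdev_get_bdevs wait up to bdev_timeout ms for the bdev to
      # appear; a zero exit status means the bdev exists and was listed.
      "$rpc" bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
  }

  waitforbdev ftl0 || echo "ftl0 did not appear in time" >&2

A zero exit from the bounded bdev_get_bdevs call is what lets trim.sh@51 proceed to the configuration dump that follows.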
00:19:28.473 "base_bdev": "57af7d92-23ab-4b8c-872e-23cc612d8e27", 00:19:28.473 "cache": "nvc0n1p0" 00:19:28.473 } 00:19:28.473 } 00:19:28.473 } 00:19:28.473 ]' 00:19:28.473 12:40:37 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:28.473 12:40:37 -- ftl/trim.sh@60 -- # nb=23592960 00:19:28.473 12:40:37 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:28.732 [2024-05-15 12:40:37.577100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.577178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.732 [2024-05-15 12:40:37.577207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:28.732 [2024-05-15 12:40:37.577227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.577296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.732 [2024-05-15 12:40:37.581130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.581190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.732 [2024-05-15 12:40:37.581218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.797 ms 00:19:28.732 [2024-05-15 12:40:37.581234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.582175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.582221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.732 [2024-05-15 12:40:37.582248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:19:28.732 [2024-05-15 12:40:37.582265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.585999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.586056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.732 [2024-05-15 12:40:37.586103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.679 ms 00:19:28.732 [2024-05-15 12:40:37.586118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.593547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.593591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:28.732 [2024-05-15 12:40:37.593620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.351 ms 00:19:28.732 [2024-05-15 12:40:37.593635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.625681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.625735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.732 [2024-05-15 12:40:37.625763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.879 ms 00:19:28.732 [2024-05-15 12:40:37.625778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.645774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.645879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.732 [2024-05-15 12:40:37.645928] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.876 ms 00:19:28.732 [2024-05-15 12:40:37.645946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.646329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.646360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.732 [2024-05-15 12:40:37.646386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:19:28.732 [2024-05-15 12:40:37.646402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.678492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.678625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:28.732 [2024-05-15 12:40:37.678655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.022 ms 00:19:28.732 [2024-05-15 12:40:37.678672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.708179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.708226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:28.732 [2024-05-15 12:40:37.708269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.359 ms 00:19:28.732 [2024-05-15 12:40:37.708284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.732 [2024-05-15 12:40:37.738354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.732 [2024-05-15 12:40:37.738425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.732 [2024-05-15 12:40:37.738472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.958 ms 00:19:28.732 [2024-05-15 12:40:37.738488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.997 [2024-05-15 12:40:37.768976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.997 [2024-05-15 12:40:37.769046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.997 [2024-05-15 12:40:37.769096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.286 ms 00:19:28.997 [2024-05-15 12:40:37.769112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.997 [2024-05-15 12:40:37.769229] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.997 [2024-05-15 12:40:37.769260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 
[2024-05-15 12:40:37.769383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:19:28.997 [2024-05-15 12:40:37.769945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.769975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.997 [2024-05-15 12:40:37.770480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.770995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.998 [2024-05-15 12:40:37.771292] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.998 [2024-05-15 12:40:37.771315] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c28e999-af9a-4c95-b334-96e908a03298 00:19:28.998 [2024-05-15 12:40:37.771331] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.998 [2024-05-15 12:40:37.771349] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.998 [2024-05-15 12:40:37.771363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.998 [2024-05-15 12:40:37.771380] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.998 [2024-05-15 12:40:37.771394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.998 [2024-05-15 12:40:37.771420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:19:28.998 [2024-05-15 12:40:37.771443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.998 [2024-05-15 12:40:37.771461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.998 [2024-05-15 12:40:37.771481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.998 [2024-05-15 12:40:37.771511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.998 [2024-05-15 12:40:37.771529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.998 [2024-05-15 12:40:37.771548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:19:28.998 [2024-05-15 12:40:37.771567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.998 [2024-05-15 12:40:37.788899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.998 [2024-05-15 12:40:37.789093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.998 [2024-05-15 12:40:37.789253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.258 ms 00:19:28.998 [2024-05-15 12:40:37.789440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.998 [2024-05-15 12:40:37.789889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.998 [2024-05-15 12:40:37.790082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.998 [2024-05-15 12:40:37.790263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:19:28.998 [2024-05-15 12:40:37.790331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.998 [2024-05-15 12:40:37.850857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.998 [2024-05-15 12:40:37.851276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.998 [2024-05-15 12:40:37.851451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.998 [2024-05-15 12:40:37.851618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.998 [2024-05-15 12:40:37.851959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.998 [2024-05-15 12:40:37.852109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.998 [2024-05-15 12:40:37.852272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.998 [2024-05-15 12:40:37.852419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.998 [2024-05-15 12:40:37.852682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.998 [2024-05-15 12:40:37.852834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.998 [2024-05-15 12:40:37.852985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.998 [2024-05-15 12:40:37.853132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.998 [2024-05-15 12:40:37.853243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.998 [2024-05-15 12:40:37.853332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.998 [2024-05-15 12:40:37.853484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.998 [2024-05-15 12:40:37.853708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.998 [2024-05-15 
12:40:37.975924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.998 [2024-05-15 12:40:37.976280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.998 [2024-05-15 12:40:37.976457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.998 [2024-05-15 12:40:37.976659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.290 [2024-05-15 12:40:38.016681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.291 [2024-05-15 12:40:38.016938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.291 [2024-05-15 12:40:38.017091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.291 [2024-05-15 12:40:38.017278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.291 [2024-05-15 12:40:38.017454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.291 [2024-05-15 12:40:38.017585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.291 [2024-05-15 12:40:38.017739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.291 [2024-05-15 12:40:38.017883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.291 [2024-05-15 12:40:38.018122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.291 [2024-05-15 12:40:38.018270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.291 [2024-05-15 12:40:38.018431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.291 [2024-05-15 12:40:38.018513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.291 [2024-05-15 12:40:38.018822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.291 [2024-05-15 12:40:38.018917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.291 [2024-05-15 12:40:38.019070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.291 [2024-05-15 12:40:38.019216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.291 [2024-05-15 12:40:38.019467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.291 [2024-05-15 12:40:38.019631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.291 [2024-05-15 12:40:38.019784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.291 [2024-05-15 12:40:38.019957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.291 [2024-05-15 12:40:38.020105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.291 [2024-05-15 12:40:38.020184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.291 [2024-05-15 12:40:38.020330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.291 [2024-05-15 12:40:38.020475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.291 [2024-05-15 12:40:38.020732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.291 [2024-05-15 12:40:38.020768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.291 [2024-05-15 12:40:38.020792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.291 [2024-05-15 12:40:38.020807] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:29.291 [2024-05-15 12:40:38.021065] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 443.928 ms, result 0
00:19:29.291 true
00:19:29.291 12:40:38 -- ftl/trim.sh@63 -- # killprocess 73814
00:19:29.291 12:40:38 -- common/autotest_common.sh@926 -- # '[' -z 73814 ']'
00:19:29.291 12:40:38 -- common/autotest_common.sh@930 -- # kill -0 73814
00:19:29.291 12:40:38 -- common/autotest_common.sh@931 -- # uname
00:19:29.291 12:40:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:19:29.291 12:40:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73814
00:19:29.291 12:40:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:19:29.291 12:40:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:19:29.291 12:40:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73814'
00:19:29.291 killing process with pid 73814
00:19:29.291 12:40:38 -- common/autotest_common.sh@945 -- # kill 73814
00:19:29.291 12:40:38 -- common/autotest_common.sh@950 -- # wait 73814
00:19:34.558 12:40:43 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:19:35.498 65536+0 records in
00:19:35.498 65536+0 records out
00:19:35.498 268435456 bytes (268 MB, 256 MiB) copied, 1.15215 s, 233 MB/s
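The dd above produces 256 MiB of random data, and the spdk_dd invocation that follows pushes it into ftl0 using the JSON config saved earlier. A sketch of that data path, with two hedges: the log's dd line does not show its output redirection, so the of= target below is inferred from the --if path, and the read-back/cmp step is illustrative rather than something this trace performs:

  #!/usr/bin/env bash
  # Sketch of the random-pattern data path seen in trim.sh@66/@69.
  SPDK=/home/vagrant/spdk_repo/spdk
  pattern=$SPDK/test/ftl/random_pattern        # inferred dd target
  cfg=$SPDK/test/ftl/config/ftl.json           # config saved via rpc.py above

  # 65536 blocks of 4 KiB = 268435456 bytes, matching the dd stats above.
  dd if=/dev/urandom of="$pattern" bs=4K count=65536

  # Write the pattern into the ftl0 bdev (file in, bdev out).
  "$SPDK"/build/bin/spdk_dd --if="$pattern" --ob=ftl0 --json="$cfg"

  # Illustrative verification, not part of the trace: read the same range
  # back out of the bdev and compare it with the source pattern.
  "$SPDK"/build/bin/spdk_dd --ib=ftl0 --of=/tmp/readback --json="$cfg" \
      --bs=4096 --count=65536
  cmp "$pattern" /tmp/readback && echo "pattern verified"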
00:19:35.498 12:40:44 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:35.498 [2024-05-15 12:40:44.488640] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:35.756 [2024-05-15 12:40:44.488810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74029 ]
00:19:36.014 [2024-05-15 12:40:44.664768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:36.273 [2024-05-15 12:40:44.933582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:36.532 [2024-05-15 12:40:45.280877] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:36.532 [2024-05-15 12:40:45.280998] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:36.532 [2024-05-15 12:40:45.439872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.532 [2024-05-15 12:40:45.439936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:36.532 [2024-05-15 12:40:45.439974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:19:36.532 [2024-05-15 12:40:45.439991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.532 [2024-05-15 12:40:45.443438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.443484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:36.533 [2024-05-15 12:40:45.443535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.419 ms
00:19:36.533 [2024-05-15 12:40:45.443548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.533 [2024-05-15 12:40:45.443679] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:36.533 [2024-05-15 12:40:45.444700] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:36.533 [2024-05-15 12:40:45.444746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.444771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:36.533 [2024-05-15 12:40:45.444783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms
00:19:36.533 [2024-05-15 12:40:45.444803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.533 [2024-05-15 12:40:45.446823] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:36.533 [2024-05-15 12:40:45.463540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.463583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:36.533 [2024-05-15 12:40:45.463618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.718 ms
00:19:36.533 [2024-05-15 12:40:45.463630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.533 [2024-05-15 12:40:45.463747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.463768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:36.533 [2024-05-15 12:40:45.463786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:19:36.533 [2024-05-15 12:40:45.463797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.533 [2024-05-15 12:40:45.472709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.472773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:36.533 [2024-05-15 12:40:45.472792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.849 ms
00:19:36.533 [2024-05-15 12:40:45.472804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.533 [2024-05-15 12:40:45.473005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.473035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:36.533 [2024-05-15 12:40:45.473050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms
00:19:36.533 [2024-05-15 12:40:45.473062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.533 [2024-05-15 12:40:45.473119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.473135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:19:36.533 [2024-05-15 12:40:45.473148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:19:36.533 [2024-05-15 12:40:45.473160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.533 [2024-05-15 12:40:45.473200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:36.533 [2024-05-15 12:40:45.478156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:36.533 [2024-05-15 12:40:45.478198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:36.533 [2024-05-15 12:40:45.478215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms
00:19:36.533 [2024-05-15 12:40:45.478228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.533 
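Each management step in these traces is logged by trace_step as an Action/name/duration/status quadruple, which makes the startup cost easy to break down offline. A sketch that pulls the slowest steps out of a captured console log (the file name console.log is an assumption):

  # Sketch: rank FTL management steps by duration from a saved console log.
  # 407:trace_step lines carry "name: <step>", 409:trace_step lines carry
  # "duration: <ms> ms", exactly as in the format shown above.
  awk '/407:trace_step/ { sub(/.*name: /, ""); step = $0 }
       /409:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                          printf "%10.3f ms  %s\n", $0, step }' console.log |
      sort -rn | head

Run against the first startup earlier in this log, it would surface Scrub NV cache (2412.322 ms) and Clear L2P (81.240 ms) as the dominant steps.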
[2024-05-15 12:40:45.478321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.533 [2024-05-15 12:40:45.478346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.533 [2024-05-15 12:40:45.478359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:36.533 [2024-05-15 12:40:45.478371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.533 [2024-05-15 12:40:45.478404] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:36.533 [2024-05-15 12:40:45.478434] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:36.533 [2024-05-15 12:40:45.478475] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:36.533 [2024-05-15 12:40:45.478519] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:36.533 [2024-05-15 12:40:45.478609] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:36.533 [2024-05-15 12:40:45.478625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:36.533 [2024-05-15 12:40:45.478640] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:36.533 [2024-05-15 12:40:45.478655] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.533 [2024-05-15 12:40:45.478669] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.533 [2024-05-15 12:40:45.478681] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:36.533 [2024-05-15 12:40:45.478692] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:36.533 [2024-05-15 12:40:45.478704] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:36.533 [2024-05-15 12:40:45.478715] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:36.533 [2024-05-15 12:40:45.478727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.533 [2024-05-15 12:40:45.478738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.533 [2024-05-15 12:40:45.478755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:19:36.533 [2024-05-15 12:40:45.478766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.533 [2024-05-15 12:40:45.478846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.533 [2024-05-15 12:40:45.478862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.533 [2024-05-15 12:40:45.478873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:36.533 [2024-05-15 12:40:45.478884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.533 [2024-05-15 12:40:45.478975] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.533 [2024-05-15 12:40:45.478998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.533 [2024-05-15 12:40:45.479011] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479028] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:36.533 [2024-05-15 12:40:45.479051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.533 [2024-05-15 12:40:45.479082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.533 [2024-05-15 12:40:45.479103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.533 [2024-05-15 12:40:45.479113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:36.533 [2024-05-15 12:40:45.479124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.533 [2024-05-15 12:40:45.479134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.533 [2024-05-15 12:40:45.479144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:36.533 [2024-05-15 12:40:45.479157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.533 [2024-05-15 12:40:45.479178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:36.533 [2024-05-15 12:40:45.479189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:36.533 [2024-05-15 12:40:45.479223] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:36.533 [2024-05-15 12:40:45.479234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.533 [2024-05-15 12:40:45.479256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.533 [2024-05-15 12:40:45.479287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479307] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.533 [2024-05-15 12:40:45.479318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.533 [2024-05-15 12:40:45.479350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.533 [2024-05-15 12:40:45.479381] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.533 [2024-05-15 12:40:45.479404] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.533 [2024-05-15 12:40:45.479415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:36.533 [2024-05-15 12:40:45.479426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.533 [2024-05-15 12:40:45.479436] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.533 [2024-05-15 12:40:45.479448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.533 [2024-05-15 12:40:45.479459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.533 [2024-05-15 12:40:45.479471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.533 [2024-05-15 12:40:45.479483] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:36.533 [2024-05-15 12:40:45.479507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.533 [2024-05-15 12:40:45.479521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:36.533 [2024-05-15 12:40:45.479533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.533 [2024-05-15 12:40:45.479544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.534 [2024-05-15 12:40:45.479555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:36.534 [2024-05-15 12:40:45.479567] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.534 [2024-05-15 12:40:45.479589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.534 [2024-05-15 12:40:45.479602] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:36.534 [2024-05-15 12:40:45.479614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:36.534 [2024-05-15 12:40:45.479626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:36.534 [2024-05-15 12:40:45.479638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:36.534 [2024-05-15 12:40:45.479650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:36.534 [2024-05-15 12:40:45.479662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:36.534 [2024-05-15 12:40:45.479674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:36.534 [2024-05-15 12:40:45.479685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:36.534 [2024-05-15 12:40:45.479697] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:36.534 
[2024-05-15 12:40:45.479709] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:36.534 [2024-05-15 12:40:45.479721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:36.534 [2024-05-15 12:40:45.479733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:36.534 [2024-05-15 12:40:45.479745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:36.534 [2024-05-15 12:40:45.479757] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.534 [2024-05-15 12:40:45.479771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.534 [2024-05-15 12:40:45.479784] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.534 [2024-05-15 12:40:45.479796] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.534 [2024-05-15 12:40:45.479808] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.534 [2024-05-15 12:40:45.479820] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.534 [2024-05-15 12:40:45.479833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.534 [2024-05-15 12:40:45.479852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.534 [2024-05-15 12:40:45.479864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:19:36.534 [2024-05-15 12:40:45.479876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.534 [2024-05-15 12:40:45.502322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.534 [2024-05-15 12:40:45.502552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.534 [2024-05-15 12:40:45.502675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.379 ms 00:19:36.534 [2024-05-15 12:40:45.502727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.534 [2024-05-15 12:40:45.503012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.534 [2024-05-15 12:40:45.503141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.534 [2024-05-15 12:40:45.503248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:36.534 [2024-05-15 12:40:45.503306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.555845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.556205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.793 [2024-05-15 12:40:45.556348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.418 ms 00:19:36.793 [2024-05-15 12:40:45.556403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 
[2024-05-15 12:40:45.556672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.556791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.793 [2024-05-15 12:40:45.556917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.793 [2024-05-15 12:40:45.556941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.557596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.557626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.793 [2024-05-15 12:40:45.557642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:19:36.793 [2024-05-15 12:40:45.557654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.557818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.557838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.793 [2024-05-15 12:40:45.557851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:36.793 [2024-05-15 12:40:45.557870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.579197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.579255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.793 [2024-05-15 12:40:45.579275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.293 ms 00:19:36.793 [2024-05-15 12:40:45.579288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.596507] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:36.793 [2024-05-15 12:40:45.596586] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:36.793 [2024-05-15 12:40:45.596610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.596623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:36.793 [2024-05-15 12:40:45.596637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.113 ms 00:19:36.793 [2024-05-15 12:40:45.596648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.624869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.624930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:36.793 [2024-05-15 12:40:45.624948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.111 ms 00:19:36.793 [2024-05-15 12:40:45.624959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.640463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.640576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:36.793 [2024-05-15 12:40:45.640597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.353 ms 00:19:36.793 [2024-05-15 12:40:45.640609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.655704] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.655745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:36.793 [2024-05-15 12:40:45.655776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.937 ms 00:19:36.793 [2024-05-15 12:40:45.655788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.656325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.656362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.793 [2024-05-15 12:40:45.656380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:19:36.793 [2024-05-15 12:40:45.656392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.738853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.738953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:36.793 [2024-05-15 12:40:45.738976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.423 ms 00:19:36.793 [2024-05-15 12:40:45.738989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.752411] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:36.793 [2024-05-15 12:40:45.773724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.773797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:36.793 [2024-05-15 12:40:45.773818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.560 ms 00:19:36.793 [2024-05-15 12:40:45.773830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.773980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.774001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:36.793 [2024-05-15 12:40:45.774015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:36.793 [2024-05-15 12:40:45.774027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.774109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.774138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:36.793 [2024-05-15 12:40:45.774151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:36.793 [2024-05-15 12:40:45.774171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.776696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.776737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:36.793 [2024-05-15 12:40:45.776753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:19:36.793 [2024-05-15 12:40:45.776765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.776808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.776823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:36.793 [2024-05-15 12:40:45.776835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.006 ms 00:19:36.793 [2024-05-15 12:40:45.776847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.793 [2024-05-15 12:40:45.776901] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:36.793 [2024-05-15 12:40:45.776918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.793 [2024-05-15 12:40:45.776930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:36.793 [2024-05-15 12:40:45.776942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:36.793 [2024-05-15 12:40:45.776953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.053 [2024-05-15 12:40:45.808272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.053 [2024-05-15 12:40:45.808332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.053 [2024-05-15 12:40:45.808352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.288 ms 00:19:37.053 [2024-05-15 12:40:45.808374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.053 [2024-05-15 12:40:45.808534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.053 [2024-05-15 12:40:45.808557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.053 [2024-05-15 12:40:45.808571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:37.053 [2024-05-15 12:40:45.808583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.053 [2024-05-15 12:40:45.809765] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.053 [2024-05-15 12:40:45.813804] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 369.519 ms, result 0 00:19:37.053 [2024-05-15 12:40:45.814696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.053 [2024-05-15 12:40:45.831134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:47.406  Copying: 256/256 [MB] (average 24 MBps)[2024-05-15 12:40:56.178989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.406 [2024-05-15 12:40:56.191401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.191443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.406 [2024-05-15 12:40:56.191480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:47.406 [2024-05-15 12:40:56.191493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.191550] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:47.406 [2024-05-15 12:40:56.195058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.195092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]
name: Unregister IO device 00:19:47.406 [2024-05-15 12:40:56.195123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.487 ms 00:19:47.406 [2024-05-15 12:40:56.195135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.196895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.196934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.406 [2024-05-15 12:40:56.196967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:19:47.406 [2024-05-15 12:40:56.196979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.203774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.203814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.406 [2024-05-15 12:40:56.203855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.771 ms 00:19:47.406 [2024-05-15 12:40:56.203867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.211123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.211160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:47.406 [2024-05-15 12:40:56.211191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.181 ms 00:19:47.406 [2024-05-15 12:40:56.211202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.240068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.240128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.406 [2024-05-15 12:40:56.240161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.795 ms 00:19:47.406 [2024-05-15 12:40:56.240172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.257066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.257108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.406 [2024-05-15 12:40:56.257141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.798 ms 00:19:47.406 [2024-05-15 12:40:56.257159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.257328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.257348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.406 [2024-05-15 12:40:56.257361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:47.406 [2024-05-15 12:40:56.257372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.286993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.287034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:47.406 [2024-05-15 12:40:56.287067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.598 ms 00:19:47.406 [2024-05-15 12:40:56.287092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.315949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 
12:40:56.315990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:47.406 [2024-05-15 12:40:56.316037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.779 ms 00:19:47.406 [2024-05-15 12:40:56.316048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.346312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.346373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.406 [2024-05-15 12:40:56.346407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.188 ms 00:19:47.406 [2024-05-15 12:40:56.346419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.377054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.406 [2024-05-15 12:40:56.377099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.406 [2024-05-15 12:40:56.377117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.486 ms 00:19:47.406 [2024-05-15 12:40:56.377128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.406 [2024-05-15 12:40:56.377210] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.406 [2024-05-15 12:40:56.377257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.406 [2024-05-15 12:40:56.377273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.406 [2024-05-15 12:40:56.377285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.406 [2024-05-15 12:40:56.377297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377438] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 
12:40:56.377803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.377998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:19:47.407 [2024-05-15 12:40:56.378130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.407 [2024-05-15 12:40:56.378330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.408 [2024-05-15 12:40:56.378604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.408 [2024-05-15 12:40:56.378616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c28e999-af9a-4c95-b334-96e908a03298 00:19:47.408 [2024-05-15 12:40:56.378644] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.408 [2024-05-15 12:40:56.378655] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.408 [2024-05-15 12:40:56.378666] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.408 [2024-05-15 12:40:56.378677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.408 [2024-05-15 12:40:56.378688] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.408 [2024-05-15 12:40:56.378700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.408 [2024-05-15 12:40:56.378712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.408 [2024-05-15 12:40:56.378722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.408 [2024-05-15 12:40:56.378732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.408 [2024-05-15 12:40:56.378744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.408 [2024-05-15 12:40:56.378756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.408 [2024-05-15 12:40:56.378775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:19:47.408 [2024-05-15 12:40:56.378787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.408 [2024-05-15 12:40:56.396140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.408 [2024-05-15 12:40:56.396185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.408 [2024-05-15 12:40:56.396203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.325 ms 00:19:47.408 [2024-05-15 12:40:56.396231] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.408 [2024-05-15 12:40:56.396560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.408 [2024-05-15 12:40:56.396588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.408 [2024-05-15 12:40:56.396602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:47.408 [2024-05-15 12:40:56.396614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.447050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.447127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.667 [2024-05-15 12:40:56.447146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.447158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.447302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.447327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.667 [2024-05-15 12:40:56.447339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.447350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.447413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.447431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.667 [2024-05-15 12:40:56.447444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.447455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.447480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.447493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.667 [2024-05-15 12:40:56.447550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.447564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.553260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.553319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.667 [2024-05-15 12:40:56.553354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.553367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.592345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.592407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.667 [2024-05-15 12:40:56.592426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.592438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.592560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.592580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.667 [2024-05-15 12:40:56.592594] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.592606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.592646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.592660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.667 [2024-05-15 12:40:56.592672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.592691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.592817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.592836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.667 [2024-05-15 12:40:56.592848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.592860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.592911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.592929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.667 [2024-05-15 12:40:56.592942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.667 [2024-05-15 12:40:56.592953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.667 [2024-05-15 12:40:56.593009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.667 [2024-05-15 12:40:56.593031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.667 [2024-05-15 12:40:56.593044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-05-15 12:40:56.593056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-05-15 12:40:56.593114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.668 [2024-05-15 12:40:56.593130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.668 [2024-05-15 12:40:56.593142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.668 [2024-05-15 12:40:56.593161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.668 [2024-05-15 12:40:56.593342] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.939 ms, result 0 00:19:49.108 00:19:49.108 00:19:49.108 12:40:57 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:49.108 12:40:57 -- ftl/trim.sh@72 -- # svcpid=74170 00:19:49.108 12:40:57 -- ftl/trim.sh@73 -- # waitforlisten 74170 00:19:49.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.108 12:40:57 -- common/autotest_common.sh@819 -- # '[' -z 74170 ']' 00:19:49.108 12:40:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.108 12:40:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.108 12:40:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:49.108 12:40:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.108 12:40:57 -- common/autotest_common.sh@10 -- # set +x 00:19:49.108 [2024-05-15 12:40:58.052545] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:49.108 [2024-05-15 12:40:58.053021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74170 ] 00:19:49.366 [2024-05-15 12:40:58.220409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.625 [2024-05-15 12:40:58.453714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.625 [2024-05-15 12:40:58.453970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.002 12:40:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:51.002 12:40:59 -- common/autotest_common.sh@852 -- # return 0 00:19:51.002 12:40:59 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:51.002 [2024-05-15 12:40:59.921138] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.002 [2024-05-15 12:40:59.921229] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.262 [2024-05-15 12:41:00.084013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.084078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:51.262 [2024-05-15 12:41:00.084110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:51.262 [2024-05-15 12:41:00.084125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.088177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.088223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.262 [2024-05-15 12:41:00.088250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:19:51.262 [2024-05-15 12:41:00.088264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.088473] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:51.262 [2024-05-15 12:41:00.089425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:51.262 [2024-05-15 12:41:00.089473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.089511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.262 [2024-05-15 12:41:00.089543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:19:51.262 [2024-05-15 12:41:00.089556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.091534] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:51.262 [2024-05-15 12:41:00.108431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.108490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:51.262 [2024-05-15 12:41:00.108527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.908 ms 00:19:51.262 [2024-05-15 12:41:00.108547] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.108678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.108707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:51.262 [2024-05-15 12:41:00.108723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:51.262 [2024-05-15 12:41:00.108741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.117549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.117624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.262 [2024-05-15 12:41:00.117642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.728 ms 00:19:51.262 [2024-05-15 12:41:00.117664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.117822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.117856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.262 [2024-05-15 12:41:00.117877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:51.262 [2024-05-15 12:41:00.117895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.117940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.117964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:51.262 [2024-05-15 12:41:00.117986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:51.262 [2024-05-15 12:41:00.118004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.118056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:51.262 [2024-05-15 12:41:00.123095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.123136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.262 [2024-05-15 12:41:00.123160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.051 ms 00:19:51.262 [2024-05-15 12:41:00.123174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.123281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.262 [2024-05-15 12:41:00.123302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:51.262 [2024-05-15 12:41:00.123321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:51.262 [2024-05-15 12:41:00.123335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.262 [2024-05-15 12:41:00.123375] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:51.262 [2024-05-15 12:41:00.123410] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:51.262 [2024-05-15 12:41:00.123465] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:51.263 [2024-05-15 12:41:00.123490] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:51.263 [2024-05-15 12:41:00.123610] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:51.263 [2024-05-15 12:41:00.123630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:51.263 [2024-05-15 12:41:00.123652] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:51.263 [2024-05-15 12:41:00.123669] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:51.263 [2024-05-15 12:41:00.123689] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:51.263 [2024-05-15 12:41:00.123708] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:51.263 [2024-05-15 12:41:00.123726] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:51.263 [2024-05-15 12:41:00.123739] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:51.263 [2024-05-15 12:41:00.123760] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:51.263 [2024-05-15 12:41:00.123774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.263 [2024-05-15 12:41:00.123792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:51.263 [2024-05-15 12:41:00.123807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:19:51.263 [2024-05-15 12:41:00.123824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.263 [2024-05-15 12:41:00.123911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.263 [2024-05-15 12:41:00.123934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:51.263 [2024-05-15 12:41:00.123948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:51.263 [2024-05-15 12:41:00.123972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.263 [2024-05-15 12:41:00.124066] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:51.263 [2024-05-15 12:41:00.124105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:51.263 [2024-05-15 12:41:00.124120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:51.263 [2024-05-15 12:41:00.124172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:51.263 [2024-05-15 12:41:00.124219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.263 [2024-05-15 12:41:00.124247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:51.263 [2024-05-15 12:41:00.124264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:51.263 [2024-05-15 12:41:00.124276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.263 
[2024-05-15 12:41:00.124292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:51.263 [2024-05-15 12:41:00.124304] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:51.263 [2024-05-15 12:41:00.124321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:51.263 [2024-05-15 12:41:00.124350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:51.263 [2024-05-15 12:41:00.124361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:51.263 [2024-05-15 12:41:00.124389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:51.263 [2024-05-15 12:41:00.124406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:51.263 [2024-05-15 12:41:00.124439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:51.263 [2024-05-15 12:41:00.124479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:51.263 [2024-05-15 12:41:00.124542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124590] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:51.263 [2024-05-15 12:41:00.124602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:51.263 [2024-05-15 12:41:00.124648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.263 [2024-05-15 12:41:00.124678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:51.263 [2024-05-15 12:41:00.124690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:51.263 [2024-05-15 12:41:00.124712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.263 [2024-05-15 12:41:00.124724] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:51.263 [2024-05-15 12:41:00.124742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:51.263 [2024-05-15 12:41:00.124755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.263 [2024-05-15 12:41:00.124790] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:19:51.263 [2024-05-15 12:41:00.124807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:51.263 [2024-05-15 12:41:00.124819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:51.263 [2024-05-15 12:41:00.124836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:51.263 [2024-05-15 12:41:00.124848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:51.263 [2024-05-15 12:41:00.124865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:51.263 [2024-05-15 12:41:00.124879] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:51.263 [2024-05-15 12:41:00.124900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.263 [2024-05-15 12:41:00.124915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:51.263 [2024-05-15 12:41:00.124933] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:51.263 [2024-05-15 12:41:00.124947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:51.263 [2024-05-15 12:41:00.124971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:51.263 [2024-05-15 12:41:00.124985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:51.263 [2024-05-15 12:41:00.125003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:51.263 [2024-05-15 12:41:00.125016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:51.263 [2024-05-15 12:41:00.125035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:51.263 [2024-05-15 12:41:00.125048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:51.263 [2024-05-15 12:41:00.125066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:51.263 [2024-05-15 12:41:00.125080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:51.263 [2024-05-15 12:41:00.125097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:51.263 [2024-05-15 12:41:00.125112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:51.263 [2024-05-15 12:41:00.125129] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:51.263 [2024-05-15 12:41:00.125145] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.263 [2024-05-15 12:41:00.125164] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:51.263 [2024-05-15 12:41:00.125178] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:51.263 [2024-05-15 12:41:00.125196] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:51.263 [2024-05-15 12:41:00.125210] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:51.263 [2024-05-15 12:41:00.125234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.263 [2024-05-15 12:41:00.125248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:51.263 [2024-05-15 12:41:00.125266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:19:51.263 [2024-05-15 12:41:00.125279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.263 [2024-05-15 12:41:00.148485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.263 [2024-05-15 12:41:00.148695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.263 [2024-05-15 12:41:00.148829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.128 ms 00:19:51.263 [2024-05-15 12:41:00.148958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.263 [2024-05-15 12:41:00.149197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.263 [2024-05-15 12:41:00.149264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.263 [2024-05-15 12:41:00.149393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:51.263 [2024-05-15 12:41:00.149535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.263 [2024-05-15 12:41:00.195035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.263 [2024-05-15 12:41:00.195234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.264 [2024-05-15 12:41:00.195366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.409 ms 00:19:51.264 [2024-05-15 12:41:00.195424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.264 [2024-05-15 12:41:00.195609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.264 [2024-05-15 12:41:00.195669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.264 [2024-05-15 12:41:00.195723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:51.264 [2024-05-15 12:41:00.195834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.264 [2024-05-15 12:41:00.196487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.264 [2024-05-15 12:41:00.196634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.264 [2024-05-15 12:41:00.196775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:19:51.264 [2024-05-15 12:41:00.196908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.264 [2024-05-15 12:41:00.197126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.264 [2024-05-15 12:41:00.197238] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.264 [2024-05-15 12:41:00.197394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:51.264 [2024-05-15 12:41:00.197450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.264 [2024-05-15 12:41:00.220281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.264 [2024-05-15 12:41:00.220469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.264 [2024-05-15 12:41:00.220649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.738 ms 00:19:51.264 [2024-05-15 12:41:00.220775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.264 [2024-05-15 12:41:00.238044] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:51.264 [2024-05-15 12:41:00.238239] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:51.264 [2024-05-15 12:41:00.238392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.264 [2024-05-15 12:41:00.238590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:51.264 [2024-05-15 12:41:00.238622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.284 ms 00:19:51.264 [2024-05-15 12:41:00.238637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.264 [2024-05-15 12:41:00.268163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.264 [2024-05-15 12:41:00.268245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:51.264 [2024-05-15 12:41:00.268274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.427 ms 00:19:51.264 [2024-05-15 12:41:00.268288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.283690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.283732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:51.523 [2024-05-15 12:41:00.283770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.298 ms 00:19:51.523 [2024-05-15 12:41:00.283782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.298859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.298901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:51.523 [2024-05-15 12:41:00.298925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.983 ms 00:19:51.523 [2024-05-15 12:41:00.298937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.299448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.299487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.523 [2024-05-15 12:41:00.299529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:19:51.523 [2024-05-15 12:41:00.299543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.385747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.385822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:19:51.523 [2024-05-15 12:41:00.385857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.160 ms 00:19:51.523 [2024-05-15 12:41:00.385872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.398393] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:51.523 [2024-05-15 12:41:00.419624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.419718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.523 [2024-05-15 12:41:00.419741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.594 ms 00:19:51.523 [2024-05-15 12:41:00.419757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.419893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.419920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:51.523 [2024-05-15 12:41:00.419935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:51.523 [2024-05-15 12:41:00.419950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.420023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.420047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.523 [2024-05-15 12:41:00.420061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:51.523 [2024-05-15 12:41:00.420075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.423254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.423303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:51.523 [2024-05-15 12:41:00.423336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:19:51.523 [2024-05-15 12:41:00.423354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.423396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.423419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:51.523 [2024-05-15 12:41:00.423438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:51.523 [2024-05-15 12:41:00.423463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.423540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:51.523 [2024-05-15 12:41:00.423572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.423587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:51.523 [2024-05-15 12:41:00.423605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:51.523 [2024-05-15 12:41:00.423618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.455439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.455508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.523 [2024-05-15 12:41:00.455537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.768 ms 
00:19:51.523 [2024-05-15 12:41:00.455551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.455695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-05-15 12:41:00.455717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:51.523 [2024-05-15 12:41:00.455737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:51.523 [2024-05-15 12:41:00.455750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-05-15 12:41:00.456999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.523 [2024-05-15 12:41:00.461220] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.538 ms, result 0 00:19:51.523 [2024-05-15 12:41:00.462805] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:51.523 Some configs were skipped because the RPC state that can call them passed over. 00:19:51.523 12:41:00 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:51.813 [2024-05-15 12:41:00.731865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.813 [2024-05-15 12:41:00.731943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:51.813 [2024-05-15 12:41:00.731967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.754 ms 00:19:51.813 [2024-05-15 12:41:00.731983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.813 [2024-05-15 12:41:00.732043] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.930 ms, result 0 00:19:51.813 true 00:19:51.813 12:41:00 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:52.072 [2024-05-15 12:41:00.984283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.072 [2024-05-15 12:41:00.984348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:52.072 [2024-05-15 12:41:00.984373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.938 ms 00:19:52.072 [2024-05-15 12:41:00.984386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.072 [2024-05-15 12:41:00.984446] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.103 ms, result 0 00:19:52.072 true 00:19:52.072 12:41:01 -- ftl/trim.sh@81 -- # killprocess 74170 00:19:52.072 12:41:01 -- common/autotest_common.sh@926 -- # '[' -z 74170 ']' 00:19:52.072 12:41:01 -- common/autotest_common.sh@930 -- # kill -0 74170 00:19:52.072 12:41:01 -- common/autotest_common.sh@931 -- # uname 00:19:52.072 12:41:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.072 12:41:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74170 00:19:52.072 killing process with pid 74170 00:19:52.072 12:41:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:52.072 12:41:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:52.072 12:41:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74170' 00:19:52.072 12:41:01 -- common/autotest_common.sh@945 -- # kill 74170 00:19:52.072 12:41:01 -- 
common/autotest_common.sh@950 -- # wait 74170 00:19:53.450 [2024-05-15 12:41:02.054061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.054151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.450 [2024-05-15 12:41:02.054190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.450 [2024-05-15 12:41:02.054205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.054255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.450 [2024-05-15 12:41:02.057857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.057894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.450 [2024-05-15 12:41:02.057920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:19:53.450 [2024-05-15 12:41:02.057932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.058232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.058251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.450 [2024-05-15 12:41:02.058266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:53.450 [2024-05-15 12:41:02.058278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.064966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.065011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.450 [2024-05-15 12:41:02.065031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.642 ms 00:19:53.450 [2024-05-15 12:41:02.065044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.072371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.072408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:53.450 [2024-05-15 12:41:02.072430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.271 ms 00:19:53.450 [2024-05-15 12:41:02.072443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.085121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.085165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.450 [2024-05-15 12:41:02.085202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.583 ms 00:19:53.450 [2024-05-15 12:41:02.085215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.094491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.094562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.450 [2024-05-15 12:41:02.094599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.224 ms 00:19:53.450 [2024-05-15 12:41:02.094616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.094798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.094819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:19:53.450 [2024-05-15 12:41:02.094836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:53.450 [2024-05-15 12:41:02.094848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.107673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.107712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:53.450 [2024-05-15 12:41:02.107737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.780 ms 00:19:53.450 [2024-05-15 12:41:02.107750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.120359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.120397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:53.450 [2024-05-15 12:41:02.120429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.553 ms 00:19:53.450 [2024-05-15 12:41:02.120442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.450 [2024-05-15 12:41:02.132626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.450 [2024-05-15 12:41:02.132666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.451 [2024-05-15 12:41:02.132705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.113 ms 00:19:53.451 [2024-05-15 12:41:02.132717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.451 [2024-05-15 12:41:02.144501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.451 [2024-05-15 12:41:02.144538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.451 [2024-05-15 12:41:02.144578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.684 ms 00:19:53.451 [2024-05-15 12:41:02.144590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.451 [2024-05-15 12:41:02.144657] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.451 [2024-05-15 12:41:02.144682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144836] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.144994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145239] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 
12:41:02.145699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.145989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.146002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.146020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.146033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.146063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.146076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.146094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:19:53.451 [2024-05-15 12:41:02.146108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.451 [2024-05-15 12:41:02.146130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.452 [2024-05-15 12:41:02.146374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.452 [2024-05-15 12:41:02.146414] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c28e999-af9a-4c95-b334-96e908a03298 00:19:53.452 [2024-05-15 12:41:02.146430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.452 [2024-05-15 12:41:02.146455] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.452 [2024-05-15 12:41:02.146468] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.452 [2024-05-15 12:41:02.146485] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.452 [2024-05-15 12:41:02.146510] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.452 [2024-05-15 12:41:02.146529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.452 [2024-05-15 12:41:02.146542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.452 [2024-05-15 12:41:02.146558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.452 [2024-05-15 12:41:02.146569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
00:19:53.452 [2024-05-15 12:41:02.146588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.452 [2024-05-15 12:41:02.146601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.452 [2024-05-15 12:41:02.146620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.935 ms 00:19:53.452 [2024-05-15 12:41:02.146633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.163464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.452 [2024-05-15 12:41:02.163572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.452 [2024-05-15 12:41:02.163604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.781 ms 00:19:53.452 [2024-05-15 12:41:02.163618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.163938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.452 [2024-05-15 12:41:02.163967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.452 [2024-05-15 12:41:02.163990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:19:53.452 [2024-05-15 12:41:02.164004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.224783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.224848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.452 [2024-05-15 12:41:02.224887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.224900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.225036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.225055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.452 [2024-05-15 12:41:02.225071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.225083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.225156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.225175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.452 [2024-05-15 12:41:02.225203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.225217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.225253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.225268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.452 [2024-05-15 12:41:02.225286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.225298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.335115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.335189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.452 [2024-05-15 12:41:02.335213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.335227] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.375005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.375061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.452 [2024-05-15 12:41:02.375093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.375107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.375208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.375231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.452 [2024-05-15 12:41:02.375251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.375263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.375307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.375322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.452 [2024-05-15 12:41:02.375337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.375349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.375489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.375536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.452 [2024-05-15 12:41:02.375559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.375572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.375646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.375665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.452 [2024-05-15 12:41:02.375682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.375693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.375747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.375762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.452 [2024-05-15 12:41:02.375783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.375795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.375856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.452 [2024-05-15 12:41:02.375873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.452 [2024-05-15 12:41:02.375888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.452 [2024-05-15 12:41:02.375900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.452 [2024-05-15 12:41:02.376079] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 321.990 ms, result 0 00:19:54.829 12:41:03 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:54.829 12:41:03 -- ftl/trim.sh@85 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.829 [2024-05-15 12:41:03.649150] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:54.829 [2024-05-15 12:41:03.649312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74240 ] 00:19:54.829 [2024-05-15 12:41:03.825430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.397 [2024-05-15 12:41:04.108720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.657 [2024-05-15 12:41:04.456643] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.657 [2024-05-15 12:41:04.456726] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.657 [2024-05-15 12:41:04.615165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.615223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.657 [2024-05-15 12:41:04.615244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:55.657 [2024-05-15 12:41:04.615262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.618596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.618642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.657 [2024-05-15 12:41:04.618659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.305 ms 00:19:55.657 [2024-05-15 12:41:04.618671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.618830] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.657 [2024-05-15 12:41:04.619796] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.657 [2024-05-15 12:41:04.619837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.619857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.657 [2024-05-15 12:41:04.619870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:19:55.657 [2024-05-15 12:41:04.619882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.621828] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.657 [2024-05-15 12:41:04.638527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.638583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.657 [2024-05-15 12:41:04.638613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.700 ms 00:19:55.657 [2024-05-15 12:41:04.638627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.638768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.638790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.657 [2024-05-15 12:41:04.638809] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:55.657 [2024-05-15 12:41:04.638822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.647563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.647632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.657 [2024-05-15 12:41:04.647650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.674 ms 00:19:55.657 [2024-05-15 12:41:04.647663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.647846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.647873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.657 [2024-05-15 12:41:04.647887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:55.657 [2024-05-15 12:41:04.647905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.647954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.647969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.657 [2024-05-15 12:41:04.647982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:55.657 [2024-05-15 12:41:04.647994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.648045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.657 [2024-05-15 12:41:04.653172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.653212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.657 [2024-05-15 12:41:04.653228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.142 ms 00:19:55.657 [2024-05-15 12:41:04.653240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.653333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.653356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.657 [2024-05-15 12:41:04.653369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:55.657 [2024-05-15 12:41:04.653381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.653422] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.657 [2024-05-15 12:41:04.653459] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:55.657 [2024-05-15 12:41:04.653541] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.657 [2024-05-15 12:41:04.653575] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:55.657 [2024-05-15 12:41:04.653671] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:55.657 [2024-05-15 12:41:04.653687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.657 [2024-05-15 12:41:04.653702] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:55.657 [2024-05-15 12:41:04.653718] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.657 [2024-05-15 12:41:04.653732] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.657 [2024-05-15 12:41:04.653745] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.657 [2024-05-15 12:41:04.653757] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.657 [2024-05-15 12:41:04.653769] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:55.657 [2024-05-15 12:41:04.653780] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:55.657 [2024-05-15 12:41:04.653792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.653805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.657 [2024-05-15 12:41:04.653821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:19:55.657 [2024-05-15 12:41:04.653833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.653915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.657 [2024-05-15 12:41:04.653931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.657 [2024-05-15 12:41:04.653943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:55.657 [2024-05-15 12:41:04.653956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.657 [2024-05-15 12:41:04.654045] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.657 [2024-05-15 12:41:04.654062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.657 [2024-05-15 12:41:04.654075] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.657 [2024-05-15 12:41:04.654092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.657 [2024-05-15 12:41:04.654104] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.657 [2024-05-15 12:41:04.654115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.657 [2024-05-15 12:41:04.654126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.657 [2024-05-15 12:41:04.654136] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.657 [2024-05-15 12:41:04.654147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.657 [2024-05-15 12:41:04.654157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.657 [2024-05-15 12:41:04.654168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.657 [2024-05-15 12:41:04.654180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.657 [2024-05-15 12:41:04.654190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.657 [2024-05-15 12:41:04.654202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.657 [2024-05-15 12:41:04.654213] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:55.657 [2024-05-15 12:41:04.654224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.657 
[2024-05-15 12:41:04.654234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.657 [2024-05-15 12:41:04.654245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:55.658 [2024-05-15 12:41:04.654256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.658 [2024-05-15 12:41:04.654280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:55.658 [2024-05-15 12:41:04.654291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:55.658 [2024-05-15 12:41:04.654302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:55.658 [2024-05-15 12:41:04.654313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.658 [2024-05-15 12:41:04.654324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.658 [2024-05-15 12:41:04.654334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.658 [2024-05-15 12:41:04.654345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.658 [2024-05-15 12:41:04.654355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:55.658 [2024-05-15 12:41:04.654366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.658 [2024-05-15 12:41:04.654377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.658 [2024-05-15 12:41:04.654387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.658 [2024-05-15 12:41:04.654397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.658 [2024-05-15 12:41:04.654408] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.658 [2024-05-15 12:41:04.654418] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:55.658 [2024-05-15 12:41:04.654429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.658 [2024-05-15 12:41:04.654439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.658 [2024-05-15 12:41:04.654450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.658 [2024-05-15 12:41:04.654460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.658 [2024-05-15 12:41:04.654471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.658 [2024-05-15 12:41:04.654482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:55.658 [2024-05-15 12:41:04.654508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.658 [2024-05-15 12:41:04.654521] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.658 [2024-05-15 12:41:04.654533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.658 [2024-05-15 12:41:04.654547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.658 [2024-05-15 12:41:04.654559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.658 [2024-05-15 12:41:04.654571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.658 [2024-05-15 12:41:04.654582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.658 [2024-05-15 12:41:04.654592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.658 [2024-05-15 12:41:04.654604] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:19:55.658 [2024-05-15 12:41:04.654615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.658 [2024-05-15 12:41:04.654626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.658 [2024-05-15 12:41:04.654637] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.658 [2024-05-15 12:41:04.654662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.658 [2024-05-15 12:41:04.654685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.658 [2024-05-15 12:41:04.654697] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:55.658 [2024-05-15 12:41:04.654708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.658 [2024-05-15 12:41:04.654729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:55.658 [2024-05-15 12:41:04.654740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:55.658 [2024-05-15 12:41:04.654752] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:55.658 [2024-05-15 12:41:04.654763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:55.658 [2024-05-15 12:41:04.654775] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:55.658 [2024-05-15 12:41:04.654786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:55.658 [2024-05-15 12:41:04.654798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:55.658 [2024-05-15 12:41:04.654809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:55.658 [2024-05-15 12:41:04.654821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:55.658 [2024-05-15 12:41:04.654834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:55.658 [2024-05-15 12:41:04.654845] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.658 [2024-05-15 12:41:04.654859] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.658 [2024-05-15 12:41:04.654871] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.658 [2024-05-15 12:41:04.654889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.658 [2024-05-15 
12:41:04.654901] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.658 [2024-05-15 12:41:04.654913] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.658 [2024-05-15 12:41:04.654926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.658 [2024-05-15 12:41:04.654942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.658 [2024-05-15 12:41:04.654954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:19:55.658 [2024-05-15 12:41:04.654967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.677374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.677440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.917 [2024-05-15 12:41:04.677461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.339 ms 00:19:55.917 [2024-05-15 12:41:04.677473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.677689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.677710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.917 [2024-05-15 12:41:04.677723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:55.917 [2024-05-15 12:41:04.677735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.731444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.731528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.917 [2024-05-15 12:41:04.731551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.673 ms 00:19:55.917 [2024-05-15 12:41:04.731564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.731713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.731732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.917 [2024-05-15 12:41:04.731746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.917 [2024-05-15 12:41:04.731757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.732371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.732395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.917 [2024-05-15 12:41:04.732409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:19:55.917 [2024-05-15 12:41:04.732421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.732614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.732639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.917 [2024-05-15 12:41:04.732653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:19:55.917 [2024-05-15 12:41:04.732664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.753112] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.753166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.917 [2024-05-15 12:41:04.753186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.415 ms 00:19:55.917 [2024-05-15 12:41:04.753199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.769927] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.917 [2024-05-15 12:41:04.769975] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.917 [2024-05-15 12:41:04.769994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.770007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.917 [2024-05-15 12:41:04.770020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.621 ms 00:19:55.917 [2024-05-15 12:41:04.770031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.799135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.799180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.917 [2024-05-15 12:41:04.799197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.012 ms 00:19:55.917 [2024-05-15 12:41:04.799217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.814528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.814570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.917 [2024-05-15 12:41:04.814587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.217 ms 00:19:55.917 [2024-05-15 12:41:04.814599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.829601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.829669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.917 [2024-05-15 12:41:04.829687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.915 ms 00:19:55.917 [2024-05-15 12:41:04.829699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.917 [2024-05-15 12:41:04.830200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.917 [2024-05-15 12:41:04.830228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.918 [2024-05-15 12:41:04.830243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:19:55.918 [2024-05-15 12:41:04.830255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.918 [2024-05-15 12:41:04.910324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.918 [2024-05-15 12:41:04.910399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.918 [2024-05-15 12:41:04.910436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.034 ms 00:19:55.918 [2024-05-15 12:41:04.910449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.918 [2024-05-15 12:41:04.923306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 59 (of 60) MiB 00:19:56.176 [2024-05-15 12:41:04.945278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.945357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:56.176 [2024-05-15 12:41:04.945378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.667 ms 00:19:56.176 [2024-05-15 12:41:04.945392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.945594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.945618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:56.176 [2024-05-15 12:41:04.945633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:56.176 [2024-05-15 12:41:04.945645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.945723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.945745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:56.176 [2024-05-15 12:41:04.945758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:56.176 [2024-05-15 12:41:04.945770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.947952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.947990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:56.176 [2024-05-15 12:41:04.948005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:19:56.176 [2024-05-15 12:41:04.948017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.948058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.948073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:56.176 [2024-05-15 12:41:04.948085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:56.176 [2024-05-15 12:41:04.948103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.948147] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:56.176 [2024-05-15 12:41:04.948164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.948176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:56.176 [2024-05-15 12:41:04.948188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:56.176 [2024-05-15 12:41:04.948199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.979461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.979541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:56.176 [2024-05-15 12:41:04.979569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.229 ms 00:19:56.176 [2024-05-15 12:41:04.979581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.979709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.176 [2024-05-15 12:41:04.979729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:19:56.176 [2024-05-15 12:41:04.979743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:56.176 [2024-05-15 12:41:04.979755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.176 [2024-05-15 12:41:04.980897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.176 [2024-05-15 12:41:04.985001] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 365.351 ms, result 0 00:19:56.176 [2024-05-15 12:41:04.985815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.176 [2024-05-15 12:41:05.002415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.566  Copying: 256/256 [MB] (average 24 MBps)[2024-05-15 12:41:15.399495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.566 [2024-05-15 12:41:15.412083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.412129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.566 [2024-05-15 12:41:15.412167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.566 [2024-05-15 12:41:15.412188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.412222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:06.566 [2024-05-15 12:41:15.415930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.415966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:06.566 [2024-05-15 12:41:15.415982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.687 ms 00:20:06.566 [2024-05-15 12:41:15.415994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.416328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.416348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:06.566 [2024-05-15 12:41:15.416362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:06.566 [2024-05-15 12:41:15.416374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.420247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.420313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:06.566 [2024-05-15 12:41:15.420347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:20:06.566 [2024-05-15 12:41:15.420360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.427642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.427676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:06.566 [2024-05-15
12:41:15.427708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.234 ms 00:20:06.566 [2024-05-15 12:41:15.427720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.458407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.458450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:06.566 [2024-05-15 12:41:15.458483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.611 ms 00:20:06.566 [2024-05-15 12:41:15.458495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.476762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.476805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:06.566 [2024-05-15 12:41:15.476845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.138 ms 00:20:06.566 [2024-05-15 12:41:15.476857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.477037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.477058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:06.566 [2024-05-15 12:41:15.477071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:06.566 [2024-05-15 12:41:15.477082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.507752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.507793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:06.566 [2024-05-15 12:41:15.507841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.645 ms 00:20:06.566 [2024-05-15 12:41:15.507853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.538817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.538890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:06.566 [2024-05-15 12:41:15.538909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.877 ms 00:20:06.566 [2024-05-15 12:41:15.538921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.566 [2024-05-15 12:41:15.568773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.566 [2024-05-15 12:41:15.568833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.566 [2024-05-15 12:41:15.568868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.733 ms 00:20:06.566 [2024-05-15 12:41:15.568879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.826 [2024-05-15 12:41:15.597263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.826 [2024-05-15 12:41:15.597311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.826 [2024-05-15 12:41:15.597344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.250 ms 00:20:06.826 [2024-05-15 12:41:15.597355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.826 [2024-05-15 12:41:15.597433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.826 [2024-05-15 12:41:15.597459] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:06.826 [2024-05-15 12:41:15.597759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597835] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.597988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 
[2024-05-15 12:41:15.598154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:06.827 [2024-05-15 12:41:15.598458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:20:06.827 [2024-05-15 12:41:15.598471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:20:06.828 [2024-05-15 12:41:15.598812] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:06.828 [2024-05-15 12:41:15.598841] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c28e999-af9a-4c95-b334-96e908a03298 00:20:06.828 [2024-05-15 12:41:15.598854] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:06.828 [2024-05-15 12:41:15.598866] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:06.828 [2024-05-15 12:41:15.598878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:06.828 [2024-05-15 12:41:15.598890] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:06.828 [2024-05-15 12:41:15.598901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:06.828 [2024-05-15 12:41:15.598913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:06.828 [2024-05-15 12:41:15.598925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:06.828 [2024-05-15 12:41:15.598935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:06.828 [2024-05-15 12:41:15.598945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:06.828 [2024-05-15 12:41:15.598957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.828 [2024-05-15 12:41:15.598975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.828 [2024-05-15 12:41:15.598988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:20:06.828 [2024-05-15 12:41:15.599000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.616093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.828 [2024-05-15 12:41:15.616166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.828 [2024-05-15 12:41:15.616185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.062 ms 00:20:06.828 [2024-05-15 12:41:15.616198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.616573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.828 [2024-05-15 12:41:15.616597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.828 [2024-05-15 12:41:15.616612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:20:06.828 [2024-05-15 12:41:15.616624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.667374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.667444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.828 [2024-05-15 12:41:15.667479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.667491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.667700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.667719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.828 [2024-05-15 12:41:15.667733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.667745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:06.828 [2024-05-15 12:41:15.667813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.667832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.828 [2024-05-15 12:41:15.667845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.667857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.667884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.667906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.828 [2024-05-15 12:41:15.667917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.667928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.774279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.774350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.828 [2024-05-15 12:41:15.774386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.774398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.816767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.816826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.828 [2024-05-15 12:41:15.816847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.816860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.816972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.816991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.828 [2024-05-15 12:41:15.817004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.817017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.817057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.817072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.828 [2024-05-15 12:41:15.817093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.817105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.817241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.817262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.828 [2024-05-15 12:41:15.817275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.817287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.817346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.817366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:06.828 [2024-05-15 12:41:15.817379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 
12:41:15.817399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.817451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.817466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.828 [2024-05-15 12:41:15.817478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.817490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.817608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.828 [2024-05-15 12:41:15.817635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.828 [2024-05-15 12:41:15.817665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.828 [2024-05-15 12:41:15.817689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.828 [2024-05-15 12:41:15.817894] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.815 ms, result 0 00:20:08.211 00:20:08.211 00:20:08.211 12:41:16 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:08.211 12:41:17 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:08.776 12:41:17 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:08.776 [2024-05-15 12:41:17.639204] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:08.776 [2024-05-15 12:41:17.639386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74385 ] 00:20:09.034 [2024-05-15 12:41:17.815283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.334 [2024-05-15 12:41:18.072291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.591 [2024-05-15 12:41:18.453230] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.591 [2024-05-15 12:41:18.453320] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.850 [2024-05-15 12:41:18.610378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.610443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:09.850 [2024-05-15 12:41:18.610482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:09.850 [2024-05-15 12:41:18.610500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.613935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.613982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.850 [2024-05-15 12:41:18.614010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms 00:20:09.850 [2024-05-15 12:41:18.614023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.614149] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
00:20:09.850 [2024-05-15 12:41:18.615076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:09.850 [2024-05-15 12:41:18.615114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.615133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.850 [2024-05-15 12:41:18.615147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:20:09.850 [2024-05-15 12:41:18.615158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.617159] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:09.850 [2024-05-15 12:41:18.633733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.633774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:09.850 [2024-05-15 12:41:18.633796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.575 ms 00:20:09.850 [2024-05-15 12:41:18.633808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.633927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.633949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:09.850 [2024-05-15 12:41:18.633967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:09.850 [2024-05-15 12:41:18.633980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.643514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.643624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.850 [2024-05-15 12:41:18.643651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.458 ms 00:20:09.850 [2024-05-15 12:41:18.643672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.643880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.643924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.850 [2024-05-15 12:41:18.643948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:09.850 [2024-05-15 12:41:18.643968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.644069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.644094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:09.850 [2024-05-15 12:41:18.644114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:09.850 [2024-05-15 12:41:18.644134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.644197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:09.850 [2024-05-15 12:41:18.650900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.650951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.850 [2024-05-15 12:41:18.650975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.722 ms 00:20:09.850 [2024-05-15 12:41:18.650995] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.651088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.850 [2024-05-15 12:41:18.651121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:09.850 [2024-05-15 12:41:18.651142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:09.850 [2024-05-15 12:41:18.651161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.850 [2024-05-15 12:41:18.651210] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:09.850 [2024-05-15 12:41:18.651252] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:09.850 [2024-05-15 12:41:18.651311] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:09.850 [2024-05-15 12:41:18.651346] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:09.850 [2024-05-15 12:41:18.651457] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:09.850 [2024-05-15 12:41:18.651508] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:09.850 [2024-05-15 12:41:18.651537] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:09.850 [2024-05-15 12:41:18.651561] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:09.850 [2024-05-15 12:41:18.651584] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:09.850 [2024-05-15 12:41:18.651606] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:09.850 [2024-05-15 12:41:18.651625] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:09.850 [2024-05-15 12:41:18.651643] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:09.850 [2024-05-15 12:41:18.651661] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:09.851 [2024-05-15 12:41:18.651682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.651701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:09.851 [2024-05-15 12:41:18.651728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:20:09.851 [2024-05-15 12:41:18.651747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.651853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.651878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:09.851 [2024-05-15 12:41:18.651899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:09.851 [2024-05-15 12:41:18.651918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.652048] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:09.851 [2024-05-15 12:41:18.652074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:09.851 [2024-05-15 12:41:18.652095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.851 [2024-05-15 
12:41:18.652123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:09.851 [2024-05-15 12:41:18.652161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:09.851 [2024-05-15 12:41:18.652199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:09.851 [2024-05-15 12:41:18.652217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.851 [2024-05-15 12:41:18.652253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:09.851 [2024-05-15 12:41:18.652271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:09.851 [2024-05-15 12:41:18.652290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.851 [2024-05-15 12:41:18.652308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:09.851 [2024-05-15 12:41:18.652326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:09.851 [2024-05-15 12:41:18.652346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:09.851 [2024-05-15 12:41:18.652384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:09.851 [2024-05-15 12:41:18.652402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652438] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:09.851 [2024-05-15 12:41:18.652459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:09.851 [2024-05-15 12:41:18.652479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:09.851 [2024-05-15 12:41:18.652516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:09.851 [2024-05-15 12:41:18.652538] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:09.851 [2024-05-15 12:41:18.652576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:09.851 [2024-05-15 12:41:18.652594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:09.851 [2024-05-15 12:41:18.652631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:09.851 [2024-05-15 12:41:18.652650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:09.851 [2024-05-15 12:41:18.652686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:09.851 [2024-05-15 12:41:18.652704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:09.851 [2024-05-15 12:41:18.652741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 
00:20:09.851 [2024-05-15 12:41:18.652759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.851 [2024-05-15 12:41:18.652797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:09.851 [2024-05-15 12:41:18.652816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:09.851 [2024-05-15 12:41:18.652835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.851 [2024-05-15 12:41:18.652853] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:09.851 [2024-05-15 12:41:18.652872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:09.851 [2024-05-15 12:41:18.652892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.851 [2024-05-15 12:41:18.652911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.851 [2024-05-15 12:41:18.652931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:09.851 [2024-05-15 12:41:18.652950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:09.851 [2024-05-15 12:41:18.652969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:09.851 [2024-05-15 12:41:18.652990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:09.851 [2024-05-15 12:41:18.653009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:09.851 [2024-05-15 12:41:18.653027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:09.851 [2024-05-15 12:41:18.653048] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:09.851 [2024-05-15 12:41:18.653079] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.851 [2024-05-15 12:41:18.653101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:09.851 [2024-05-15 12:41:18.653121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:09.851 [2024-05-15 12:41:18.653142] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:09.851 [2024-05-15 12:41:18.653161] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:09.851 [2024-05-15 12:41:18.653181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:09.851 [2024-05-15 12:41:18.653200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:09.851 [2024-05-15 12:41:18.653220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:09.851 [2024-05-15 12:41:18.653239] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:09.851 [2024-05-15 12:41:18.653258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 
blk_offs:0x6b60 blk_sz:0x40 00:20:09.851 [2024-05-15 12:41:18.653277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:09.851 [2024-05-15 12:41:18.653297] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:09.851 [2024-05-15 12:41:18.653316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:09.851 [2024-05-15 12:41:18.653336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:09.851 [2024-05-15 12:41:18.653356] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:09.851 [2024-05-15 12:41:18.653377] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.851 [2024-05-15 12:41:18.653398] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:09.851 [2024-05-15 12:41:18.653418] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:09.851 [2024-05-15 12:41:18.653438] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:09.851 [2024-05-15 12:41:18.653457] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:09.851 [2024-05-15 12:41:18.653478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.653549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:09.851 [2024-05-15 12:41:18.653571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:20:09.851 [2024-05-15 12:41:18.653590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.682787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.682886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.851 [2024-05-15 12:41:18.682916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.090 ms 00:20:09.851 [2024-05-15 12:41:18.682939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.683174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.683208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:09.851 [2024-05-15 12:41:18.683232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:09.851 [2024-05-15 12:41:18.683253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.737464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.737546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.851 [2024-05-15 12:41:18.737568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.144 ms 00:20:09.851 [2024-05-15 12:41:18.737582] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.737719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.737739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.851 [2024-05-15 12:41:18.737752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:09.851 [2024-05-15 12:41:18.737765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.738348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.738382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.851 [2024-05-15 12:41:18.738397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:20:09.851 [2024-05-15 12:41:18.738409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.851 [2024-05-15 12:41:18.738588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.851 [2024-05-15 12:41:18.738613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.852 [2024-05-15 12:41:18.738627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:09.852 [2024-05-15 12:41:18.738639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.852 [2024-05-15 12:41:18.759112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.852 [2024-05-15 12:41:18.759162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.852 [2024-05-15 12:41:18.759180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.440 ms 00:20:09.852 [2024-05-15 12:41:18.759193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.852 [2024-05-15 12:41:18.775998] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:09.852 [2024-05-15 12:41:18.776042] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:09.852 [2024-05-15 12:41:18.776060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.852 [2024-05-15 12:41:18.776072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:09.852 [2024-05-15 12:41:18.776087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.693 ms 00:20:09.852 [2024-05-15 12:41:18.776098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.852 [2024-05-15 12:41:18.805389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.852 [2024-05-15 12:41:18.805448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:09.852 [2024-05-15 12:41:18.805465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.199 ms 00:20:09.852 [2024-05-15 12:41:18.805484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.852 [2024-05-15 12:41:18.820865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.852 [2024-05-15 12:41:18.820918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:09.852 [2024-05-15 12:41:18.820935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.267 ms 00:20:09.852 [2024-05-15 12:41:18.820946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.852 [2024-05-15 
12:41:18.836244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.852 [2024-05-15 12:41:18.836294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:09.852 [2024-05-15 12:41:18.836310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.210 ms 00:20:09.852 [2024-05-15 12:41:18.836321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.852 [2024-05-15 12:41:18.836839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.852 [2024-05-15 12:41:18.836867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:09.852 [2024-05-15 12:41:18.836882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:20:09.852 [2024-05-15 12:41:18.836894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.916871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.916925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:10.110 [2024-05-15 12:41:18.916945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.940 ms 00:20:10.110 [2024-05-15 12:41:18.916959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.929547] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:10.110 [2024-05-15 12:41:18.950914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.950974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:10.110 [2024-05-15 12:41:18.950994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.809 ms 00:20:10.110 [2024-05-15 12:41:18.951006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.951143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.951163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:10.110 [2024-05-15 12:41:18.951177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:10.110 [2024-05-15 12:41:18.951190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.951269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.951293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:10.110 [2024-05-15 12:41:18.951305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:10.110 [2024-05-15 12:41:18.951317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.953402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.953435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:10.110 [2024-05-15 12:41:18.953449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:20:10.110 [2024-05-15 12:41:18.953461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.953524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.953542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:10.110 [2024-05-15 12:41:18.953555] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:10.110 [2024-05-15 12:41:18.953573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.953624] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:10.110 [2024-05-15 12:41:18.953641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.953653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:10.110 [2024-05-15 12:41:18.953664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:10.110 [2024-05-15 12:41:18.953675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.985003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.985052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:10.110 [2024-05-15 12:41:18.985080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.298 ms 00:20:10.110 [2024-05-15 12:41:18.985092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.985227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.110 [2024-05-15 12:41:18.985256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:10.110 [2024-05-15 12:41:18.985270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:10.110 [2024-05-15 12:41:18.985282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.110 [2024-05-15 12:41:18.986436] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:10.110 [2024-05-15 12:41:18.990485] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.726 ms, result 0 00:20:10.110 [2024-05-15 12:41:18.991361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.110 [2024-05-15 12:41:19.007914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:10.370  Copying: 4096/4096 [kB] (average 25 MBps)[2024-05-15 12:41:19.171792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.370 [2024-05-15 12:41:19.184406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.184448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:10.370 [2024-05-15 12:41:19.184468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.370 [2024-05-15 12:41:19.184489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.184543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:10.370 [2024-05-15 12:41:19.188214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.188245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:10.370 [2024-05-15 12:41:19.188260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:20:10.370 [2024-05-15 12:41:19.188272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
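The 'FTL startup' management process above finishes in 375.726 ms and the 'FTL shutdown' teardown is now underway; each step in both sequences is reported by mngt/ftl_mngt.c:trace_step as a four-record group: Action, name, duration, status. A minimal sketch for totaling step durations from a saved console log, assuming one record per console line and the exact field layout visible above (an illustrative helper, not a tool from the SPDK tree):

import re
import sys
from collections import defaultdict

# Each management step is logged as four records:
# "Action", "name: <step>", "duration: <ms> ms", "status: <rc>".
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def summarize(path):
    totals = defaultdict(float)  # step name -> accumulated milliseconds
    last_name = None
    with open(path) as log:
        for line in log:
            m = NAME_RE.search(line)
            if m:
                last_name = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and last_name is not None:
                totals[last_name] += float(m.group(1))
                last_name = None
    for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")

if __name__ == "__main__":
    summarize(sys.argv[1])

Run against this build's console text, the largest contributors stand out immediately (e.g. the ~80 ms "Restore P2L checkpoints" and ~54 ms "Initialize NV cache" steps above).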
00:20:10.370 [2024-05-15 12:41:19.189967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.190020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:10.370 [2024-05-15 12:41:19.190036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:20:10.370 [2024-05-15 12:41:19.190048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.194248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.194293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:10.370 [2024-05-15 12:41:19.194308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.176 ms 00:20:10.370 [2024-05-15 12:41:19.194320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.201861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.201896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:10.370 [2024-05-15 12:41:19.201911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.480 ms 00:20:10.370 [2024-05-15 12:41:19.201924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.231872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.231947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:10.370 [2024-05-15 12:41:19.231966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.881 ms 00:20:10.370 [2024-05-15 12:41:19.231977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.249195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.249241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:10.370 [2024-05-15 12:41:19.249263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.131 ms 00:20:10.370 [2024-05-15 12:41:19.249276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.249462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.249483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:10.370 [2024-05-15 12:41:19.249526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:10.370 [2024-05-15 12:41:19.249542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.280581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.280637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:10.370 [2024-05-15 12:41:19.280684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.000 ms 00:20:10.370 [2024-05-15 12:41:19.280696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.310981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.311020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:10.370 [2024-05-15 12:41:19.311036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.204 ms 00:20:10.370 [2024-05-15 12:41:19.311047] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.341048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.341097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:10.370 [2024-05-15 12:41:19.341114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.920 ms 00:20:10.370 [2024-05-15 12:41:19.341126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.370978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.370 [2024-05-15 12:41:19.371043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:10.370 [2024-05-15 12:41:19.371060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.727 ms 00:20:10.370 [2024-05-15 12:41:19.371072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.370 [2024-05-15 12:41:19.371157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:10.370 [2024-05-15 12:41:19.371199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:10.370 [2024-05-15 12:41:19.371310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371405] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 
[2024-05-15 12:41:19.371762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.371989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:20:10.371 [2024-05-15 12:41:19.372088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:10.371 [2024-05-15 12:41:19.372530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:10.371 [2024-05-15 12:41:19.372585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c28e999-af9a-4c95-b334-96e908a03298 00:20:10.371 [2024-05-15 12:41:19.372598] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:10.372 [2024-05-15 12:41:19.372610] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:10.372 [2024-05-15 12:41:19.372621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:10.372 [2024-05-15 12:41:19.372633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:10.372 [2024-05-15 12:41:19.372644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:10.372 [2024-05-15 12:41:19.372656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:10.372 [2024-05-15 12:41:19.372667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:10.372 [2024-05-15 12:41:19.372677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:10.372 [2024-05-15 12:41:19.372687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:10.372 [2024-05-15 12:41:19.372698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.372 [2024-05-15 12:41:19.372717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:10.372 [2024-05-15 12:41:19.372729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:20:10.372 [2024-05-15 12:41:19.372741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.630 [2024-05-15 12:41:19.389705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.630 [2024-05-15 12:41:19.389744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:10.630 [2024-05-15 12:41:19.389760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.936 ms 00:20:10.630 [2024-05-15 12:41:19.389772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.630 [2024-05-15 12:41:19.390085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.630 [2024-05-15 12:41:19.390109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:20:10.630 [2024-05-15 12:41:19.390140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:20:10.630 [2024-05-15 12:41:19.390151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.630 [2024-05-15 12:41:19.440565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.630 [2024-05-15 12:41:19.440628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.630 [2024-05-15 12:41:19.440646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.630 [2024-05-15 12:41:19.440659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.630 [2024-05-15 12:41:19.440808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.630 [2024-05-15 12:41:19.440826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.630 [2024-05-15 12:41:19.440839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.630 [2024-05-15 12:41:19.440851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.440916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.440934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.631 [2024-05-15 12:41:19.440947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.440959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.440992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.441007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.631 [2024-05-15 12:41:19.441020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.441031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.544737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.544815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.631 [2024-05-15 12:41:19.544834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.544846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.583997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.584061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.631 [2024-05-15 12:41:19.584078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.584090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.584193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.584226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.631 [2024-05-15 12:41:19.584239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.584251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.584289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 
12:41:19.584303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.631 [2024-05-15 12:41:19.584340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.584351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.584476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.584495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.631 [2024-05-15 12:41:19.584508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.584519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.584603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.584621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:10.631 [2024-05-15 12:41:19.584641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.584653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.584705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.584720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.631 [2024-05-15 12:41:19.584732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.584743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.584802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.631 [2024-05-15 12:41:19.584818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.631 [2024-05-15 12:41:19.584836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.631 [2024-05-15 12:41:19.584852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.631 [2024-05-15 12:41:19.585030] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.644 ms, result 0 00:20:12.005 00:20:12.005 00:20:12.005 12:41:20 -- ftl/trim.sh@93 -- # svcpid=74421 00:20:12.005 12:41:20 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:12.005 12:41:20 -- ftl/trim.sh@94 -- # waitforlisten 74421 00:20:12.005 12:41:20 -- common/autotest_common.sh@819 -- # '[' -z 74421 ']' 00:20:12.005 12:41:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.005 12:41:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:12.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.005 12:41:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.005 12:41:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:12.005 12:41:20 -- common/autotest_common.sh@10 -- # set +x 00:20:12.005 [2024-05-15 12:41:20.905056] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
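At this point spdk_tgt is relaunched with -L ftl_init for the next trim test, and the harness blocks in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock accepts connections (the traced shell shows rpc_addr=/var/tmp/spdk.sock and max_retries=100). A rough Python equivalent of that wait loop, assuming plain connect-polling only; the real helper is a bash function in common/autotest_common.sh and, as I understand it, also verifies that the target pid is still alive between retries:

import socket
import time

def wait_for_rpc(sock_path="/var/tmp/spdk.sock", max_retries=100, delay=0.5):
    """Poll until an SPDK target accepts connections on its RPC socket.

    Simplified stand-in for autotest's waitforlisten helper; the real
    bash version also checks that the launched pid has not exited.
    """
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return
        except OSError:
            time.sleep(delay)
        finally:
            s.close()
    raise TimeoutError(f"no listener on {sock_path} after {max_retries} tries")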
00:20:12.005 [2024-05-15 12:41:20.905204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74421 ] 00:20:12.264 [2024-05-15 12:41:21.078027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.522 [2024-05-15 12:41:21.319128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:12.522 [2024-05-15 12:41:21.319372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.893 12:41:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:13.893 12:41:22 -- common/autotest_common.sh@852 -- # return 0 00:20:13.893 12:41:22 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:13.893 [2024-05-15 12:41:22.734722] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.893 [2024-05-15 12:41:22.734817] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.152 [2024-05-15 12:41:22.910803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.910881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.152 [2024-05-15 12:41:22.910907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:14.152 [2024-05-15 12:41:22.910920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.914266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.914316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.152 [2024-05-15 12:41:22.914339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:20:14.152 [2024-05-15 12:41:22.914352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.914547] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.152 [2024-05-15 12:41:22.915535] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.152 [2024-05-15 12:41:22.915579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.915596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.152 [2024-05-15 12:41:22.915612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:20:14.152 [2024-05-15 12:41:22.915624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.917609] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.152 [2024-05-15 12:41:22.934249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.934313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.152 [2024-05-15 12:41:22.934336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.647 ms 00:20:14.152 [2024-05-15 12:41:22.934352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.934471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.934510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:20:14.152 [2024-05-15 12:41:22.934528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:14.152 [2024-05-15 12:41:22.934543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.943224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.943284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.152 [2024-05-15 12:41:22.943301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.610 ms 00:20:14.152 [2024-05-15 12:41:22.943319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.943452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.943478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.152 [2024-05-15 12:41:22.943513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:14.152 [2024-05-15 12:41:22.943531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.943573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.943593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.152 [2024-05-15 12:41:22.943612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:14.152 [2024-05-15 12:41:22.943627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.943671] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:14.152 [2024-05-15 12:41:22.948695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.948734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.152 [2024-05-15 12:41:22.948753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.036 ms 00:20:14.152 [2024-05-15 12:41:22.948766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.948860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.948880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.152 [2024-05-15 12:41:22.948896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:14.152 [2024-05-15 12:41:22.948908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.948944] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.152 [2024-05-15 12:41:22.948973] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:14.152 [2024-05-15 12:41:22.949020] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.152 [2024-05-15 12:41:22.949043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:14.152 [2024-05-15 12:41:22.949131] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:14.152 [2024-05-15 12:41:22.949148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:20:14.152 [2024-05-15 12:41:22.949166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:14.152 [2024-05-15 12:41:22.949191] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.152 [2024-05-15 12:41:22.949208] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.152 [2024-05-15 12:41:22.949224] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:14.152 [2024-05-15 12:41:22.949238] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.152 [2024-05-15 12:41:22.949249] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:14.152 [2024-05-15 12:41:22.949265] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:14.152 [2024-05-15 12:41:22.949277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.949291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.152 [2024-05-15 12:41:22.949303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:20:14.152 [2024-05-15 12:41:22.949317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.949397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.152 [2024-05-15 12:41:22.949416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.152 [2024-05-15 12:41:22.949429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:14.152 [2024-05-15 12:41:22.949446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-05-15 12:41:22.949565] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.152 [2024-05-15 12:41:22.949591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.152 [2024-05-15 12:41:22.949605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.152 [2024-05-15 12:41:22.949620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.152 [2024-05-15 12:41:22.949633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.152 [2024-05-15 12:41:22.949648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.152 [2024-05-15 12:41:22.949660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:14.152 [2024-05-15 12:41:22.949676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.152 [2024-05-15 12:41:22.949688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:14.152 [2024-05-15 12:41:22.949701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.152 [2024-05-15 12:41:22.949712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.152 [2024-05-15 12:41:22.949726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:14.152 [2024-05-15 12:41:22.949737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.152 [2024-05-15 12:41:22.949755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.152 [2024-05-15 12:41:22.949766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:14.152 [2024-05-15 12:41:22.949779] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.152 [2024-05-15 12:41:22.949790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.152 [2024-05-15 12:41:22.949803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:14.152 [2024-05-15 12:41:22.949814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.152 [2024-05-15 12:41:22.949827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:14.152 [2024-05-15 12:41:22.949838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:14.153 [2024-05-15 12:41:22.949853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:14.153 [2024-05-15 12:41:22.949866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.153 [2024-05-15 12:41:22.949882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:14.153 [2024-05-15 12:41:22.949893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.153 [2024-05-15 12:41:22.949907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.153 [2024-05-15 12:41:22.949918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:14.153 [2024-05-15 12:41:22.949931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.153 [2024-05-15 12:41:22.949942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.153 [2024-05-15 12:41:22.949955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:14.153 [2024-05-15 12:41:22.949981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.153 [2024-05-15 12:41:22.949996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.153 [2024-05-15 12:41:22.950007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:14.153 [2024-05-15 12:41:22.950021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.153 [2024-05-15 12:41:22.950032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.153 [2024-05-15 12:41:22.950045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:14.153 [2024-05-15 12:41:22.950056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.153 [2024-05-15 12:41:22.950069] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.153 [2024-05-15 12:41:22.950080] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:14.153 [2024-05-15 12:41:22.950095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.153 [2024-05-15 12:41:22.950106] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.153 [2024-05-15 12:41:22.950120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.153 [2024-05-15 12:41:22.950132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.153 [2024-05-15 12:41:22.950146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.153 [2024-05-15 12:41:22.950161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.153 [2024-05-15 12:41:22.950174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.153 [2024-05-15 12:41:22.950185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
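The hex superblock dump that follows encodes each region as blk_offs/blk_sz counted in FTL blocks; assuming SPDK FTL's 4 KiB block size (FTL_BLOCK_SIZE in the sources), those values reproduce the MiB figures printed in the layout dump above. A quick sanity check under that assumption:

FTL_BLOCK_SIZE = 4096  # bytes per FTL block, assumed from the SPDK sources

def region_mib(blk_offs, blk_sz, block_size=FTL_BLOCK_SIZE):
    """Convert a superblock dump entry's block offset/size to MiB."""
    def mib(blocks):
        return blocks * block_size / (1 << 20)
    return mib(blk_offs), mib(blk_sz)

# l2p region from the dump below: type 0x2, blk_offs 0x20, blk_sz 0x5a00
print(region_mib(0x20, 0x5a00))  # (0.125, 90.0) -- matches "Region l2p ...
                                 # offset: 0.12 MiB ... blocks: 90.00 MiB" above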
00:20:14.153 [2024-05-15 12:41:22.950199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.153 [2024-05-15 12:41:22.950210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.153 [2024-05-15 12:41:22.950223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.153 [2024-05-15 12:41:22.950235] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.153 [2024-05-15 12:41:22.950252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.153 [2024-05-15 12:41:22.950265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:14.153 [2024-05-15 12:41:22.950280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:14.153 [2024-05-15 12:41:22.950293] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:14.153 [2024-05-15 12:41:22.950311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:14.153 [2024-05-15 12:41:22.950323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:14.153 [2024-05-15 12:41:22.950337] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:14.153 [2024-05-15 12:41:22.950348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:14.153 [2024-05-15 12:41:22.950362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:14.153 [2024-05-15 12:41:22.950374] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:14.153 [2024-05-15 12:41:22.950388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:14.153 [2024-05-15 12:41:22.950400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:14.153 [2024-05-15 12:41:22.950413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:14.153 [2024-05-15 12:41:22.950426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:14.153 [2024-05-15 12:41:22.950439] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.153 [2024-05-15 12:41:22.950453] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.153 [2024-05-15 12:41:22.950468] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.153 [2024-05-15 12:41:22.950479] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.153 [2024-05-15 12:41:22.950510] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.153 [2024-05-15 12:41:22.950526] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.153 [2024-05-15 12:41:22.950544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:22.950564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.153 [2024-05-15 12:41:22.950579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:20:14.153 [2024-05-15 12:41:22.950591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:22.972813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:22.972882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.153 [2024-05-15 12:41:22.972905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.146 ms 00:20:14.153 [2024-05-15 12:41:22.972918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:22.973126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:22.973147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.153 [2024-05-15 12:41:22.973167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:14.153 [2024-05-15 12:41:22.973179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.017036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.017102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.153 [2024-05-15 12:41:23.017126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.821 ms 00:20:14.153 [2024-05-15 12:41:23.017140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.017276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.017296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.153 [2024-05-15 12:41:23.017312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.153 [2024-05-15 12:41:23.017324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.017950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.017986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.153 [2024-05-15 12:41:23.018005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:20:14.153 [2024-05-15 12:41:23.018017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.018186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.018205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.153 [2024-05-15 12:41:23.018221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:20:14.153 [2024-05-15 12:41:23.018233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.039889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.039954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.153 [2024-05-15 12:41:23.039978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.622 ms 00:20:14.153 [2024-05-15 12:41:23.039991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.057106] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:14.153 [2024-05-15 12:41:23.057158] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.153 [2024-05-15 12:41:23.057180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.057194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.153 [2024-05-15 12:41:23.057210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.993 ms 00:20:14.153 [2024-05-15 12:41:23.057222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.086541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.086610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.153 [2024-05-15 12:41:23.086637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.218 ms 00:20:14.153 [2024-05-15 12:41:23.086651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.102654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.102702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.153 [2024-05-15 12:41:23.102723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.869 ms 00:20:14.153 [2024-05-15 12:41:23.102735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.117912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.117957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.153 [2024-05-15 12:41:23.117980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.082 ms 00:20:14.153 [2024-05-15 12:41:23.117992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-05-15 12:41:23.118540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-05-15 12:41:23.118577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.153 [2024-05-15 12:41:23.118597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:14.153 [2024-05-15 12:41:23.118609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.198470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.198572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.411 [2024-05-15 12:41:23.198598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.821 ms 00:20:14.411 [2024-05-15 12:41:23.198612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 
12:41:23.211167] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.411 [2024-05-15 12:41:23.232433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.232521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.411 [2024-05-15 12:41:23.232544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.669 ms 00:20:14.411 [2024-05-15 12:41:23.232559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.232692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.232719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.411 [2024-05-15 12:41:23.232734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:14.411 [2024-05-15 12:41:23.232749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.232821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.232845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.411 [2024-05-15 12:41:23.232858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:14.411 [2024-05-15 12:41:23.232872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.234973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.235017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:14.411 [2024-05-15 12:41:23.235033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:20:14.411 [2024-05-15 12:41:23.235047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.235087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.235108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.411 [2024-05-15 12:41:23.235124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:14.411 [2024-05-15 12:41:23.235143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.235192] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.411 [2024-05-15 12:41:23.235215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.235228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.411 [2024-05-15 12:41:23.235242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:14.411 [2024-05-15 12:41:23.235254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.267221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.267271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.411 [2024-05-15 12:41:23.267292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.927 ms 00:20:14.411 [2024-05-15 12:41:23.267305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.267438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.411 [2024-05-15 12:41:23.267460] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.411 [2024-05-15 12:41:23.267476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:14.411 [2024-05-15 12:41:23.267488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.411 [2024-05-15 12:41:23.268756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.411 [2024-05-15 12:41:23.272771] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.536 ms, result 0 00:20:14.411 [2024-05-15 12:41:23.274331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.411 Some configs were skipped because the RPC state that can call them passed over. 00:20:14.411 12:41:23 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:14.668 [2024-05-15 12:41:23.562524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.668 [2024-05-15 12:41:23.562621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:14.668 [2024-05-15 12:41:23.562644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.007 ms 00:20:14.668 [2024-05-15 12:41:23.562660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.668 [2024-05-15 12:41:23.562728] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.230 ms, result 0 00:20:14.668 true 00:20:14.668 12:41:23 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:14.926 [2024-05-15 12:41:23.827536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.926 [2024-05-15 12:41:23.827616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:14.926 [2024-05-15 12:41:23.827656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.488 ms 00:20:14.926 [2024-05-15 12:41:23.827670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.926 [2024-05-15 12:41:23.827736] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.720 ms, result 0 00:20:14.926 true 00:20:14.926 12:41:23 -- ftl/trim.sh@102 -- # killprocess 74421 00:20:14.926 12:41:23 -- common/autotest_common.sh@926 -- # '[' -z 74421 ']' 00:20:14.926 12:41:23 -- common/autotest_common.sh@930 -- # kill -0 74421 00:20:14.926 12:41:23 -- common/autotest_common.sh@931 -- # uname 00:20:14.926 12:41:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:14.926 12:41:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74421 00:20:14.926 killing process with pid 74421 00:20:14.926 12:41:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:14.926 12:41:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:14.926 12:41:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74421' 00:20:14.926 12:41:23 -- common/autotest_common.sh@945 -- # kill 74421 00:20:14.926 12:41:23 -- common/autotest_common.sh@950 -- # wait 74421 00:20:16.301 [2024-05-15 12:41:24.879022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.879131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:20:16.301 [2024-05-15 12:41:24.879153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.301 [2024-05-15 12:41:24.879168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.879202] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:16.301 [2024-05-15 12:41:24.882821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.882854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:16.301 [2024-05-15 12:41:24.882878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.591 ms 00:20:16.301 [2024-05-15 12:41:24.882891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.883219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.883254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:16.301 [2024-05-15 12:41:24.883271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:16.301 [2024-05-15 12:41:24.883284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.887358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.887400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:16.301 [2024-05-15 12:41:24.887419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.042 ms 00:20:16.301 [2024-05-15 12:41:24.887431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.895002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.895040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:16.301 [2024-05-15 12:41:24.895060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:20:16.301 [2024-05-15 12:41:24.895073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.907913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.907968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.301 [2024-05-15 12:41:24.908004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.767 ms 00:20:16.301 [2024-05-15 12:41:24.908017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.916688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.916747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.301 [2024-05-15 12:41:24.916784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.618 ms 00:20:16.301 [2024-05-15 12:41:24.916800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.916975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.916995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.301 [2024-05-15 12:41:24.917012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:16.301 [2024-05-15 12:41:24.917024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
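The 'FTL shutdown' trace running here is the tail end of ftl/trim.sh: the @99/@100 steps above unmapped two 1024-block ranges over RPC, and killprocess then signalled pid 74421, which is what drives this orderly persist/deinit sequence (Persist L2P, NV cache metadata, valid map, P2L, and so on). A minimal bash sketch of that pattern, assuming a running SPDK app, the rpc.py path shown in this log, and that the PID was captured when the app was launched (74421 here); only flags visible in this log are used:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
APP_PID=74421                                 # assumption: recorded at app launch
# unmap the same two 1024-block ranges the test exercised above
"$RPC" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
"$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
# a plain TERM signal is what gives FTL the clean 'FTL shutdown' path traced here
kill -0 "$APP_PID" && kill "$APP_PID"
wait "$APP_PID" 2>/dev/null || true           # wait only succeeds if the app is our child

Anything beyond those RPC flags and the kill/wait pattern is an assumption about the harness, not the test's exact code.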
00:20:16.301 [2024-05-15 12:41:24.929870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.929927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:16.301 [2024-05-15 12:41:24.929947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.814 ms 00:20:16.301 [2024-05-15 12:41:24.929959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.942083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.942140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:16.301 [2024-05-15 12:41:24.942182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.073 ms 00:20:16.301 [2024-05-15 12:41:24.942194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.954112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.954199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.301 [2024-05-15 12:41:24.954234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.869 ms 00:20:16.301 [2024-05-15 12:41:24.954248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.966459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.301 [2024-05-15 12:41:24.966522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.301 [2024-05-15 12:41:24.966544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.123 ms 00:20:16.301 [2024-05-15 12:41:24.966557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.301 [2024-05-15 12:41:24.966609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.301 [2024-05-15 12:41:24.966635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966793] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.966997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 
12:41:24.967148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.301 [2024-05-15 12:41:24.967248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:20:16.302 [2024-05-15 12:41:24.967515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.967996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.968010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.968023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.968038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:16.302 [2024-05-15 12:41:24.968059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.302 [2024-05-15 12:41:24.968093] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c28e999-af9a-4c95-b334-96e908a03298 00:20:16.302 [2024-05-15 12:41:24.968106] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.302 [2024-05-15 12:41:24.968124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:16.302 [2024-05-15 12:41:24.968136] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.302 [2024-05-15 12:41:24.968150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.302 [2024-05-15 12:41:24.968161] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.302 [2024-05-15 12:41:24.968175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:16.302 [2024-05-15 12:41:24.968187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.302 [2024-05-15 12:41:24.968200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.302 [2024-05-15 12:41:24.968210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.302 [2024-05-15 12:41:24.968225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.302 [2024-05-15 12:41:24.968237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.302 [2024-05-15 12:41:24.968252] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:20:16.302 [2024-05-15 12:41:24.968264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:24.985404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.302 [2024-05-15 12:41:24.985443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.302 [2024-05-15 12:41:24.985484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.093 ms 00:20:16.302 [2024-05-15 12:41:24.985497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:24.985835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.302 [2024-05-15 12:41:24.985869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.302 [2024-05-15 12:41:24.985888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:20:16.302 [2024-05-15 12:41:24.985901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:25.045067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.302 [2024-05-15 12:41:25.045145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.302 [2024-05-15 12:41:25.045185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.302 [2024-05-15 12:41:25.045198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:25.045349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.302 [2024-05-15 12:41:25.045367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.302 [2024-05-15 12:41:25.045382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.302 [2024-05-15 12:41:25.045394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:25.045469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.302 [2024-05-15 12:41:25.045487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.302 [2024-05-15 12:41:25.045554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.302 [2024-05-15 12:41:25.045572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:25.045612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.302 [2024-05-15 12:41:25.045628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.302 [2024-05-15 12:41:25.045642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.302 [2024-05-15 12:41:25.045655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:25.156624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.302 [2024-05-15 12:41:25.156705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.302 [2024-05-15 12:41:25.156729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.302 [2024-05-15 12:41:25.156741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.302 [2024-05-15 12:41:25.198491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.302 [2024-05-15 12:41:25.198601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:20:16.303 [2024-05-15 12:41:25.198644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.303 [2024-05-15 12:41:25.198657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.303 [2024-05-15 12:41:25.198768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.303 [2024-05-15 12:41:25.198792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.303 [2024-05-15 12:41:25.198811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.303 [2024-05-15 12:41:25.198823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.303 [2024-05-15 12:41:25.198868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.303 [2024-05-15 12:41:25.198882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.303 [2024-05-15 12:41:25.198897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.303 [2024-05-15 12:41:25.198909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.303 [2024-05-15 12:41:25.199041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.303 [2024-05-15 12:41:25.199067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.303 [2024-05-15 12:41:25.199088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.303 [2024-05-15 12:41:25.199100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.303 [2024-05-15 12:41:25.199158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.303 [2024-05-15 12:41:25.199176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.303 [2024-05-15 12:41:25.199192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.303 [2024-05-15 12:41:25.199203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.303 [2024-05-15 12:41:25.199257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.303 [2024-05-15 12:41:25.199272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.303 [2024-05-15 12:41:25.199294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.303 [2024-05-15 12:41:25.199305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.303 [2024-05-15 12:41:25.199368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.303 [2024-05-15 12:41:25.199384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.303 [2024-05-15 12:41:25.199399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.303 [2024-05-15 12:41:25.199411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.303 [2024-05-15 12:41:25.199610] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 320.558 ms, result 0 00:20:17.739 12:41:26 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.739 [2024-05-15 12:41:26.507034] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
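With the previous instance shut down cleanly (result 0), ftl/trim.sh@105 reads the device back: spdk_dd boots its own SPDK application from ftl.json — hence the fresh EAL and FTL bring-up that follows — and copies 65536 blocks from bdev ftl0 into a plain file. A sketch of that readback step, using only the flags shown in the command above; the reference file and the compare are assumptions about how a trim test would verify the result, not the script's actual code:

SPDK=/home/vagrant/spdk_repo/spdk
# read 65536 blocks from bdev ftl0 into a regular file, using the same JSON config
"$SPDK"/build/bin/spdk_dd --ib=ftl0 \
    --of="$SPDK"/test/ftl/data \
    --count=65536 \
    --json="$SPDK"/test/ftl/config/ftl.json
# assumption: a pre-trim reference copy (data.ref) exists; the two unmapped ranges
# should now read back differently, so a byte compare would expose exactly that
cmp -s "$SPDK"/test/ftl/data "$SPDK"/test/ftl/data.ref \
    || echo "readback differs from pre-trim reference (expected in the trimmed ranges)"

Because spdk_dd embeds a full SPDK app, the bdev and FTL startup notices below mirror the earlier bring-up sequence before any data is actually copied.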
00:20:17.739 [2024-05-15 12:41:26.507284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74498 ] 00:20:17.739 [2024-05-15 12:41:26.684350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.997 [2024-05-15 12:41:26.921455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.563 [2024-05-15 12:41:27.271679] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.563 [2024-05-15 12:41:27.271765] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.563 [2024-05-15 12:41:27.429242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.429315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.563 [2024-05-15 12:41:27.429353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:18.563 [2024-05-15 12:41:27.429371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.432780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.432826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.563 [2024-05-15 12:41:27.432844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:20:18.563 [2024-05-15 12:41:27.432856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.432981] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.563 [2024-05-15 12:41:27.433968] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:18.563 [2024-05-15 12:41:27.434011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.434032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.563 [2024-05-15 12:41:27.434045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:20:18.563 [2024-05-15 12:41:27.434057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.436132] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:18.563 [2024-05-15 12:41:27.452916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.452976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:18.563 [2024-05-15 12:41:27.453010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.786 ms 00:20:18.563 [2024-05-15 12:41:27.453023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.453143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.453166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:18.563 [2024-05-15 12:41:27.453184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:18.563 [2024-05-15 12:41:27.453196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.461892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 
12:41:27.461949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.563 [2024-05-15 12:41:27.461966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.632 ms 00:20:18.563 [2024-05-15 12:41:27.461979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.462157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.462189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.563 [2024-05-15 12:41:27.462204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:18.563 [2024-05-15 12:41:27.462216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.462261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.462278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.563 [2024-05-15 12:41:27.462291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:18.563 [2024-05-15 12:41:27.462303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.462345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:18.563 [2024-05-15 12:41:27.467375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.467417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.563 [2024-05-15 12:41:27.467450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.046 ms 00:20:18.563 [2024-05-15 12:41:27.467462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.467583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.467610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.563 [2024-05-15 12:41:27.467623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:18.563 [2024-05-15 12:41:27.467635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.467670] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:18.563 [2024-05-15 12:41:27.467702] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:18.563 [2024-05-15 12:41:27.467746] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:18.563 [2024-05-15 12:41:27.467768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:18.563 [2024-05-15 12:41:27.467855] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:18.563 [2024-05-15 12:41:27.467872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.563 [2024-05-15 12:41:27.467888] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:18.563 [2024-05-15 12:41:27.467903] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.563 [2024-05-15 12:41:27.467917] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.563 [2024-05-15 12:41:27.467930] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:18.563 [2024-05-15 12:41:27.467942] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.563 [2024-05-15 12:41:27.467953] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:18.563 [2024-05-15 12:41:27.467965] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:18.563 [2024-05-15 12:41:27.467977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.467990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.563 [2024-05-15 12:41:27.468007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:18.563 [2024-05-15 12:41:27.468019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.468101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.563 [2024-05-15 12:41:27.468135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.563 [2024-05-15 12:41:27.468148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:18.563 [2024-05-15 12:41:27.468160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.563 [2024-05-15 12:41:27.468251] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.563 [2024-05-15 12:41:27.468270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.563 [2024-05-15 12:41:27.468283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.563 [2024-05-15 12:41:27.468301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.563 [2024-05-15 12:41:27.468313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.563 [2024-05-15 12:41:27.468324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.563 [2024-05-15 12:41:27.468335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:18.563 [2024-05-15 12:41:27.468346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.563 [2024-05-15 12:41:27.468356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:18.563 [2024-05-15 12:41:27.468367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.563 [2024-05-15 12:41:27.468378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.563 [2024-05-15 12:41:27.468389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:18.563 [2024-05-15 12:41:27.468399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.563 [2024-05-15 12:41:27.468410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.563 [2024-05-15 12:41:27.468421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:18.563 [2024-05-15 12:41:27.468431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.563 [2024-05-15 12:41:27.468442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:18.563 [2024-05-15 12:41:27.468455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:18.563 [2024-05-15 12:41:27.468466] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:18.563 [2024-05-15 12:41:27.468686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:18.563 [2024-05-15 12:41:27.468749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:18.563 [2024-05-15 12:41:27.468872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:18.563 [2024-05-15 12:41:27.468925] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.563 [2024-05-15 12:41:27.468966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:18.563 [2024-05-15 12:41:27.469074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.563 [2024-05-15 12:41:27.469093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.563 [2024-05-15 12:41:27.469105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:18.563 [2024-05-15 12:41:27.469116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.563 [2024-05-15 12:41:27.469127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.563 [2024-05-15 12:41:27.469138] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:18.563 [2024-05-15 12:41:27.469148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.563 [2024-05-15 12:41:27.469159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:18.564 [2024-05-15 12:41:27.469169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:18.564 [2024-05-15 12:41:27.469180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.564 [2024-05-15 12:41:27.469191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.564 [2024-05-15 12:41:27.469201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:18.564 [2024-05-15 12:41:27.469212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.564 [2024-05-15 12:41:27.469223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.564 [2024-05-15 12:41:27.469234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:18.564 [2024-05-15 12:41:27.469244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.564 [2024-05-15 12:41:27.469254] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.564 [2024-05-15 12:41:27.469267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.564 [2024-05-15 12:41:27.469279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.564 [2024-05-15 12:41:27.469290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.564 [2024-05-15 12:41:27.469302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:18.564 [2024-05-15 12:41:27.469313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.564 [2024-05-15 12:41:27.469324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.564 [2024-05-15 12:41:27.469335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.564 [2024-05-15 12:41:27.469346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.564 [2024-05-15 12:41:27.469360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.564 [2024-05-15 12:41:27.469373] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.564 [2024-05-15 12:41:27.469396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.564 [2024-05-15 12:41:27.469410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:18.564 [2024-05-15 12:41:27.469422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:18.564 [2024-05-15 12:41:27.469434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:18.564 [2024-05-15 12:41:27.469446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:18.564 [2024-05-15 12:41:27.469458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:18.564 [2024-05-15 12:41:27.469470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:18.564 [2024-05-15 12:41:27.469483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:18.564 [2024-05-15 12:41:27.469527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:18.564 [2024-05-15 12:41:27.469542] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:18.564 [2024-05-15 12:41:27.469554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:18.564 [2024-05-15 12:41:27.469566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:18.564 [2024-05-15 12:41:27.469579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:18.564 [2024-05-15 12:41:27.469592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:18.564 [2024-05-15 12:41:27.469604] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.564 [2024-05-15 12:41:27.469617] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.564 [2024-05-15 12:41:27.469630] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.564 [2024-05-15 12:41:27.469643] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.564 [2024-05-15 12:41:27.469656] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.564 [2024-05-15 12:41:27.469667] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.564 [2024-05-15 12:41:27.469682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.469703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.564 [2024-05-15 12:41:27.469716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:20:18.564 [2024-05-15 12:41:27.469728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.564 [2024-05-15 12:41:27.491986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.492053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.564 [2024-05-15 12:41:27.492073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.172 ms 00:20:18.564 [2024-05-15 12:41:27.492086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.564 [2024-05-15 12:41:27.492280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.492301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.564 [2024-05-15 12:41:27.492315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:18.564 [2024-05-15 12:41:27.492327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.564 [2024-05-15 12:41:27.545634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.545706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.564 [2024-05-15 12:41:27.545739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.270 ms 00:20:18.564 [2024-05-15 12:41:27.545752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.564 [2024-05-15 12:41:27.545894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.545914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.564 [2024-05-15 12:41:27.545929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:18.564 [2024-05-15 12:41:27.545941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.564 [2024-05-15 12:41:27.546557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.546578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.564 [2024-05-15 12:41:27.546592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:20:18.564 [2024-05-15 12:41:27.546604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.564 [2024-05-15 12:41:27.546765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.546786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.564 [2024-05-15 12:41:27.546799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:18.564 [2024-05-15 12:41:27.546811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.564 [2024-05-15 12:41:27.567194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.564 [2024-05-15 12:41:27.567250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.564 [2024-05-15 12:41:27.567271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.349 ms 00:20:18.564 
[2024-05-15 12:41:27.567284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.584015] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:18.823 [2024-05-15 12:41:27.584068] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.823 [2024-05-15 12:41:27.584087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.584101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.823 [2024-05-15 12:41:27.584115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.622 ms 00:20:18.823 [2024-05-15 12:41:27.584126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.613795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.613867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.823 [2024-05-15 12:41:27.613888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.561 ms 00:20:18.823 [2024-05-15 12:41:27.613909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.629464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.629542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.823 [2024-05-15 12:41:27.629562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.433 ms 00:20:18.823 [2024-05-15 12:41:27.629573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.644675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.644729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.823 [2024-05-15 12:41:27.644761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.002 ms 00:20:18.823 [2024-05-15 12:41:27.644772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.645256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.645285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.823 [2024-05-15 12:41:27.645300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:20:18.823 [2024-05-15 12:41:27.645311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.723408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.723485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.823 [2024-05-15 12:41:27.723521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.044 ms 00:20:18.823 [2024-05-15 12:41:27.723535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.736488] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.823 [2024-05-15 12:41:27.758238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.758311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.823 [2024-05-15 12:41:27.758349] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.551 ms 00:20:18.823 [2024-05-15 12:41:27.758361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.758539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.758563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.823 [2024-05-15 12:41:27.758577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:18.823 [2024-05-15 12:41:27.758590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.758673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.758697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.823 [2024-05-15 12:41:27.758710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:18.823 [2024-05-15 12:41:27.758722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.760821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.760861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:18.823 [2024-05-15 12:41:27.760877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:20:18.823 [2024-05-15 12:41:27.760889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.760932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.760948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.823 [2024-05-15 12:41:27.760961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:18.823 [2024-05-15 12:41:27.760979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.761029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.823 [2024-05-15 12:41:27.761047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.761059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.823 [2024-05-15 12:41:27.761072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:18.823 [2024-05-15 12:41:27.761084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.792637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.792836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.823 [2024-05-15 12:41:27.792967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.523 ms 00:20:18.823 [2024-05-15 12:41:27.793019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.793265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.823 [2024-05-15 12:41:27.793409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.823 [2024-05-15 12:41:27.793561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:18.823 [2024-05-15 12:41:27.793675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.823 [2024-05-15 12:41:27.794941] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:18.823 [2024-05-15 12:41:27.799120] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 365.229 ms, result 0
00:20:18.823 [2024-05-15 12:41:27.800075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:18.823 [2024-05-15 12:41:27.816648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:29.582  Copying: 28/256 [MB] (28 MBps) Copying: 52/256 [MB] (24 MBps) Copying: 77/256 [MB] (24 MBps) Copying: 101/256 [MB] (24 MBps) Copying: 127/256 [MB] (25 MBps) Copying: 151/256 [MB] (23 MBps) Copying: 175/256 [MB] (24 MBps) Copying: 199/256 [MB] (24 MBps) Copying: 223/256 [MB] (24 MBps) Copying: 248/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 24 MBps)
[2024-05-15 12:41:38.447203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:29.582 [2024-05-15 12:41:38.466690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.582 [2024-05-15 12:41:38.466759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:29.582 [2024-05-15 12:41:38.466813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:29.582 [2024-05-15 12:41:38.466840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.582 [2024-05-15 12:41:38.466879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:29.582 [2024-05-15 12:41:38.470512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.582 [2024-05-15 12:41:38.470545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:29.582 [2024-05-15 12:41:38.470576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.601 ms
00:20:29.582 [2024-05-15 12:41:38.470587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.582 [2024-05-15 12:41:38.470946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.582 [2024-05-15 12:41:38.470969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:29.582 [2024-05-15 12:41:38.470983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms
00:20:29.582 [2024-05-15 12:41:38.470995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.582 [2024-05-15 12:41:38.474702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.582 [2024-05-15 12:41:38.474742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:29.582 [2024-05-15 12:41:38.474773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms
00:20:29.582 [2024-05-15 12:41:38.474785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.582 [2024-05-15 12:41:38.482325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.582 [2024-05-15 12:41:38.482360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:20:29.582 [2024-05-15 12:41:38.482391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.500 ms
00:20:29.582 [2024-05-15 12:41:38.482402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.582 [2024-05-15 12:41:38.512224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:20:29.582 [2024-05-15 12:41:38.512274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.582 [2024-05-15 12:41:38.512307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.743 ms 00:20:29.582 [2024-05-15 12:41:38.512318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-05-15 12:41:38.529489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-05-15 12:41:38.529594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.582 [2024-05-15 12:41:38.529637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.085 ms 00:20:29.582 [2024-05-15 12:41:38.529650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-05-15 12:41:38.529865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-05-15 12:41:38.529888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.582 [2024-05-15 12:41:38.529903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:29.582 [2024-05-15 12:41:38.529916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-05-15 12:41:38.559768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-05-15 12:41:38.559833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:29.582 [2024-05-15 12:41:38.559883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.824 ms 00:20:29.582 [2024-05-15 12:41:38.559895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-05-15 12:41:38.588795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-05-15 12:41:38.588844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:29.582 [2024-05-15 12:41:38.588878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.811 ms 00:20:29.582 [2024-05-15 12:41:38.588890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.841 [2024-05-15 12:41:38.617475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.841 [2024-05-15 12:41:38.617569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.841 [2024-05-15 12:41:38.617590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.501 ms 00:20:29.841 [2024-05-15 12:41:38.617602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.841 [2024-05-15 12:41:38.646952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.841 [2024-05-15 12:41:38.647001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:29.841 [2024-05-15 12:41:38.647035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.199 ms 00:20:29.841 [2024-05-15 12:41:38.647046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.841 [2024-05-15 12:41:38.647126] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.841 [2024-05-15 12:41:38.647152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647180] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647480] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.841 [2024-05-15 12:41:38.647600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 
[2024-05-15 12:41:38.647861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.647997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:20:29.842 [2024-05-15 12:41:38.648177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.842 [2024-05-15 12:41:38.648388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.843 [2024-05-15 12:41:38.648401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.843 [2024-05-15 12:41:38.648413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.843 [2024-05-15 12:41:38.648427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:29.843 [2024-05-15 12:41:38.648439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.843 [2024-05-15 12:41:38.648452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.843 [2024-05-15 12:41:38.648474] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.843 [2024-05-15 12:41:38.648514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3c28e999-af9a-4c95-b334-96e908a03298 
00:20:29.843 [2024-05-15 12:41:38.648528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.843 [2024-05-15 12:41:38.648540] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.843 [2024-05-15 12:41:38.648552] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.843 [2024-05-15 12:41:38.648564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.843 [2024-05-15 12:41:38.648575] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.843 [2024-05-15 12:41:38.648587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.843 [2024-05-15 12:41:38.648599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.843 [2024-05-15 12:41:38.648610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.843 [2024-05-15 12:41:38.648620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:29.843 [2024-05-15 12:41:38.648632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.843 [2024-05-15 12:41:38.648650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.843 [2024-05-15 12:41:38.648663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:20:29.843 [2024-05-15 12:41:38.648674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.843 [2024-05-15 12:41:38.664797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.843 [2024-05-15 12:41:38.664837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.843 [2024-05-15 12:41:38.664871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.094 ms 00:20:29.843 [2024-05-15 12:41:38.664882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.843 [2024-05-15 12:41:38.665170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.843 [2024-05-15 12:41:38.665195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.843 [2024-05-15 12:41:38.665210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:20:29.843 [2024-05-15 12:41:38.665221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.843 [2024-05-15 12:41:38.713776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.843 [2024-05-15 12:41:38.713875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.843 [2024-05-15 12:41:38.713911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.843 [2024-05-15 12:41:38.713923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.843 [2024-05-15 12:41:38.714087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.843 [2024-05-15 12:41:38.714106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.843 [2024-05-15 12:41:38.714119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.843 [2024-05-15 12:41:38.714131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.843 [2024-05-15 12:41:38.714199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.843 [2024-05-15 12:41:38.714217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.843 [2024-05-15 12:41:38.714230] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.843 [2024-05-15 12:41:38.714242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.843 [2024-05-15 12:41:38.714270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.843 [2024-05-15 12:41:38.714291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.843 [2024-05-15 12:41:38.714303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.843 [2024-05-15 12:41:38.714315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.843 [2024-05-15 12:41:38.816697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.843 [2024-05-15 12:41:38.816769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.843 [2024-05-15 12:41:38.816819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.843 [2024-05-15 12:41:38.816831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.101 [2024-05-15 12:41:38.857657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.101 [2024-05-15 12:41:38.857730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.101 [2024-05-15 12:41:38.857750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.101 [2024-05-15 12:41:38.857762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.101 [2024-05-15 12:41:38.857874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.101 [2024-05-15 12:41:38.857895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.101 [2024-05-15 12:41:38.857908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.101 [2024-05-15 12:41:38.857920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.101 [2024-05-15 12:41:38.857985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.101 [2024-05-15 12:41:38.858000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.101 [2024-05-15 12:41:38.858022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.101 [2024-05-15 12:41:38.858033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.101 [2024-05-15 12:41:38.858171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.101 [2024-05-15 12:41:38.858189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.101 [2024-05-15 12:41:38.858201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.101 [2024-05-15 12:41:38.858212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.101 [2024-05-15 12:41:38.858267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.101 [2024-05-15 12:41:38.858285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:30.101 [2024-05-15 12:41:38.858297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.101 [2024-05-15 12:41:38.858315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.101 [2024-05-15 12:41:38.858364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.101 [2024-05-15 12:41:38.858379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev
00:20:30.101 [2024-05-15 12:41:38.858391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:30.101 [2024-05-15 12:41:38.858402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.101 [2024-05-15 12:41:38.858458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:30.101 [2024-05-15 12:41:38.858475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:30.101 [2024-05-15 12:41:38.858510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:30.101 [2024-05-15 12:41:38.858526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.101 [2024-05-15 12:41:38.858745] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 392.088 ms, result 0
00:20:31.033
00:20:31.033
00:20:31.291 12:41:40 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:31.858 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:20:31.858 12:41:40 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:20:31.858 12:41:40 -- ftl/trim.sh@109 -- # fio_kill
00:20:31.858 12:41:40 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:31.858 12:41:40 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:31.858 12:41:40 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:20:31.858 12:41:40 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:31.858 12:41:40 -- ftl/trim.sh@20 -- # killprocess 74421
00:20:31.858 12:41:40 -- common/autotest_common.sh@926 -- # '[' -z 74421 ']'
00:20:31.858 12:41:40 -- common/autotest_common.sh@930 -- # kill -0 74421
00:20:31.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74421) - No such process
00:20:31.858 Process with pid 74421 is not found
12:41:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74421 is not found'
00:20:31.858 ************************************
00:20:31.858 END TEST ftl_trim
00:20:31.858 ************************************
00:20:31.858
00:20:31.858 real 1m12.807s
00:20:31.858 user 1m39.931s
00:20:31.858 sys 0m7.812s
00:20:31.858 12:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:31.858 12:41:40 -- common/autotest_common.sh@10 -- # set +x
00:20:31.858 12:41:40 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:20:31.858 12:41:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:20:31.858 12:41:40 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:20:31.858 12:41:40 -- common/autotest_common.sh@10 -- # set +x
00:20:31.858 ************************************
00:20:31.858 START TEST ftl_restore
00:20:31.858 ************************************
00:20:31.858 12:41:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:20:31.859 * Looking for test storage...
00:20:31.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:31.859 12:41:40 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:31.859 12:41:40 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:20:31.859 12:41:40 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:31.859 12:41:40 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:31.859 12:41:40 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:31.859 12:41:40 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:31.859 12:41:40 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:31.859 12:41:40 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:31.859 12:41:40 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:31.859 12:41:40 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.859 12:41:40 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.859 12:41:40 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:31.859 12:41:40 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:31.859 12:41:40 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:31.859 12:41:40 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:31.859 12:41:40 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:31.859 12:41:40 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:31.859 12:41:40 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.859 12:41:40 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.859 12:41:40 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:31.859 12:41:40 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:31.859 12:41:40 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:31.859 12:41:40 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:31.859 12:41:40 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:31.859 12:41:40 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:31.859 12:41:40 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:31.859 12:41:40 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:31.859 12:41:40 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:31.859 12:41:40 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:31.859 12:41:40 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:31.859 12:41:40 -- ftl/restore.sh@13 -- # mktemp -d
00:20:31.859 12:41:40 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.WAfHCE6l4f
00:20:31.859 12:41:40 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:31.859 12:41:40 -- ftl/restore.sh@16 -- # case $opt in
00:20:31.859 12:41:40 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0
00:20:31.859 12:41:40 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:31.859 12:41:40 -- ftl/restore.sh@23 -- # shift 2
00:20:31.859 12:41:40 -- ftl/restore.sh@24 -- # device=0000:00:07.0
00:20:31.859 12:41:40 -- ftl/restore.sh@25 -- # timeout=240
00:20:31.859 12:41:40 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
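At this point restore.sh has parsed its options (nv_cache on 0000:00:06.0, base device 0000:00:07.0, timeout 240 s); what follows is the spdk_tgt startup and base-bdev creation over rpc.py. Every get_bdev_size call traced below is rpc.py bdev_get_bdevs plus jq, computing block_size * num_blocks in MiB (4096 * 1310720 bytes = 5120 MiB for nvme0n1, 4096 * 26476544 bytes = 103424 MiB for the lvol). A minimal Python rendering of that probe, assuming a running SPDK target; the function name is illustrative, the real helper is a bash function in autotest_common.sh:

import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # same rpc_py path as set above

def get_bdev_size_mib(name):
    # bdev_get_bdevs -b <name> returns a one-element JSON array; the size in
    # MiB is block_size * num_blocks / 1048576, exactly as the jq pipeline does.
    info = json.loads(subprocess.check_output([RPC, "bdev_get_bdevs", "-b", name]))[0]
    return info["block_size"] * info["num_blocks"] // (1024 * 1024)

# The two sizes that appear further down in this log:
assert 4096 * 1310720 // (1024 * 1024) == 5120      # nvme0n1
assert 4096 * 26476544 // (1024 * 1024) == 103424   # lvs/nvme0n1p0 lvol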
00:20:31.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.859 12:41:40 -- ftl/restore.sh@39 -- # svcpid=74698 00:20:31.859 12:41:40 -- ftl/restore.sh@41 -- # waitforlisten 74698 00:20:31.859 12:41:40 -- common/autotest_common.sh@819 -- # '[' -z 74698 ']' 00:20:31.859 12:41:40 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:31.859 12:41:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.859 12:41:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:31.859 12:41:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.859 12:41:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:31.859 12:41:40 -- common/autotest_common.sh@10 -- # set +x 00:20:32.117 [2024-05-15 12:41:40.965130] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:32.117 [2024-05-15 12:41:40.965557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74698 ] 00:20:32.387 [2024-05-15 12:41:41.139396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.387 [2024-05-15 12:41:41.380296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:32.387 [2024-05-15 12:41:41.380777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.760 12:41:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.760 12:41:42 -- common/autotest_common.sh@852 -- # return 0 00:20:33.760 12:41:42 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:20:33.760 12:41:42 -- ftl/common.sh@54 -- # local name=nvme0 00:20:33.760 12:41:42 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:20:33.760 12:41:42 -- ftl/common.sh@56 -- # local size=103424 00:20:33.760 12:41:42 -- ftl/common.sh@59 -- # local base_bdev 00:20:33.761 12:41:42 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:20:34.017 12:41:42 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:34.017 12:41:42 -- ftl/common.sh@62 -- # local base_size 00:20:34.017 12:41:42 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:34.017 12:41:42 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:20:34.017 12:41:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:34.017 12:41:42 -- common/autotest_common.sh@1359 -- # local bs 00:20:34.017 12:41:42 -- common/autotest_common.sh@1360 -- # local nb 00:20:34.017 12:41:42 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:34.274 12:41:43 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:34.274 { 00:20:34.274 "name": "nvme0n1", 00:20:34.274 "aliases": [ 00:20:34.274 "d857d42a-72b3-4806-a287-e46e1a87cc31" 00:20:34.274 ], 00:20:34.274 "product_name": "NVMe disk", 00:20:34.274 "block_size": 4096, 00:20:34.274 "num_blocks": 1310720, 00:20:34.274 "uuid": "d857d42a-72b3-4806-a287-e46e1a87cc31", 00:20:34.274 "assigned_rate_limits": { 00:20:34.274 "rw_ios_per_sec": 0, 00:20:34.274 "rw_mbytes_per_sec": 0, 00:20:34.274 "r_mbytes_per_sec": 0, 00:20:34.274 "w_mbytes_per_sec": 0 00:20:34.274 }, 00:20:34.274 "claimed": true, 00:20:34.274 
"claim_type": "read_many_write_one", 00:20:34.274 "zoned": false, 00:20:34.274 "supported_io_types": { 00:20:34.274 "read": true, 00:20:34.274 "write": true, 00:20:34.274 "unmap": true, 00:20:34.274 "write_zeroes": true, 00:20:34.274 "flush": true, 00:20:34.274 "reset": true, 00:20:34.274 "compare": true, 00:20:34.274 "compare_and_write": false, 00:20:34.274 "abort": true, 00:20:34.274 "nvme_admin": true, 00:20:34.274 "nvme_io": true 00:20:34.274 }, 00:20:34.274 "driver_specific": { 00:20:34.274 "nvme": [ 00:20:34.274 { 00:20:34.274 "pci_address": "0000:00:07.0", 00:20:34.274 "trid": { 00:20:34.274 "trtype": "PCIe", 00:20:34.274 "traddr": "0000:00:07.0" 00:20:34.275 }, 00:20:34.275 "ctrlr_data": { 00:20:34.275 "cntlid": 0, 00:20:34.275 "vendor_id": "0x1b36", 00:20:34.275 "model_number": "QEMU NVMe Ctrl", 00:20:34.275 "serial_number": "12341", 00:20:34.275 "firmware_revision": "8.0.0", 00:20:34.275 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:34.275 "oacs": { 00:20:34.275 "security": 0, 00:20:34.275 "format": 1, 00:20:34.275 "firmware": 0, 00:20:34.275 "ns_manage": 1 00:20:34.275 }, 00:20:34.275 "multi_ctrlr": false, 00:20:34.275 "ana_reporting": false 00:20:34.275 }, 00:20:34.275 "vs": { 00:20:34.275 "nvme_version": "1.4" 00:20:34.275 }, 00:20:34.275 "ns_data": { 00:20:34.275 "id": 1, 00:20:34.275 "can_share": false 00:20:34.275 } 00:20:34.275 } 00:20:34.275 ], 00:20:34.275 "mp_policy": "active_passive" 00:20:34.275 } 00:20:34.275 } 00:20:34.275 ]' 00:20:34.275 12:41:43 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:34.275 12:41:43 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:34.275 12:41:43 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:34.533 12:41:43 -- common/autotest_common.sh@1363 -- # nb=1310720 00:20:34.533 12:41:43 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:20:34.533 12:41:43 -- common/autotest_common.sh@1367 -- # echo 5120 00:20:34.533 12:41:43 -- ftl/common.sh@63 -- # base_size=5120 00:20:34.533 12:41:43 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:34.533 12:41:43 -- ftl/common.sh@67 -- # clear_lvols 00:20:34.533 12:41:43 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:34.533 12:41:43 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:34.533 12:41:43 -- ftl/common.sh@28 -- # stores=ae63c405-74d5-4a4d-bf8c-9157418ed66b 00:20:34.533 12:41:43 -- ftl/common.sh@29 -- # for lvs in $stores 00:20:34.533 12:41:43 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae63c405-74d5-4a4d-bf8c-9157418ed66b 00:20:34.791 12:41:43 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:35.048 12:41:44 -- ftl/common.sh@68 -- # lvs=532cb311-8699-4381-ad64-f9dce0252a20 00:20:35.048 12:41:44 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 532cb311-8699-4381-ad64-f9dce0252a20 00:20:35.306 12:41:44 -- ftl/restore.sh@43 -- # split_bdev=aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:35.306 12:41:44 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:20:35.306 12:41:44 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:35.306 12:41:44 -- ftl/common.sh@35 -- # local name=nvc0 00:20:35.306 12:41:44 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:20:35.306 12:41:44 -- ftl/common.sh@37 -- # local base_bdev=aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:35.306 12:41:44 -- 
ftl/common.sh@38 -- # local cache_size= 00:20:35.306 12:41:44 -- ftl/common.sh@41 -- # get_bdev_size aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:35.306 12:41:44 -- common/autotest_common.sh@1357 -- # local bdev_name=aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:35.306 12:41:44 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:35.306 12:41:44 -- common/autotest_common.sh@1359 -- # local bs 00:20:35.306 12:41:44 -- common/autotest_common.sh@1360 -- # local nb 00:20:35.306 12:41:44 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:35.581 12:41:44 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:35.581 { 00:20:35.581 "name": "aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9", 00:20:35.581 "aliases": [ 00:20:35.581 "lvs/nvme0n1p0" 00:20:35.581 ], 00:20:35.581 "product_name": "Logical Volume", 00:20:35.581 "block_size": 4096, 00:20:35.581 "num_blocks": 26476544, 00:20:35.581 "uuid": "aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9", 00:20:35.581 "assigned_rate_limits": { 00:20:35.581 "rw_ios_per_sec": 0, 00:20:35.581 "rw_mbytes_per_sec": 0, 00:20:35.581 "r_mbytes_per_sec": 0, 00:20:35.581 "w_mbytes_per_sec": 0 00:20:35.581 }, 00:20:35.581 "claimed": false, 00:20:35.581 "zoned": false, 00:20:35.581 "supported_io_types": { 00:20:35.581 "read": true, 00:20:35.581 "write": true, 00:20:35.581 "unmap": true, 00:20:35.581 "write_zeroes": true, 00:20:35.581 "flush": false, 00:20:35.581 "reset": true, 00:20:35.581 "compare": false, 00:20:35.581 "compare_and_write": false, 00:20:35.581 "abort": false, 00:20:35.581 "nvme_admin": false, 00:20:35.581 "nvme_io": false 00:20:35.581 }, 00:20:35.581 "driver_specific": { 00:20:35.581 "lvol": { 00:20:35.581 "lvol_store_uuid": "532cb311-8699-4381-ad64-f9dce0252a20", 00:20:35.581 "base_bdev": "nvme0n1", 00:20:35.581 "thin_provision": true, 00:20:35.581 "snapshot": false, 00:20:35.581 "clone": false, 00:20:35.581 "esnap_clone": false 00:20:35.581 } 00:20:35.581 } 00:20:35.581 } 00:20:35.581 ]' 00:20:35.581 12:41:44 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:35.875 12:41:44 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:35.875 12:41:44 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:35.875 12:41:44 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:35.875 12:41:44 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:35.875 12:41:44 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:35.875 12:41:44 -- ftl/common.sh@41 -- # local base_size=5171 00:20:35.875 12:41:44 -- ftl/common.sh@44 -- # local nvc_bdev 00:20:35.875 12:41:44 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:20:36.132 12:41:44 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:36.132 12:41:44 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:36.132 12:41:44 -- ftl/common.sh@48 -- # get_bdev_size aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:36.132 12:41:44 -- common/autotest_common.sh@1357 -- # local bdev_name=aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:36.132 12:41:44 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:36.132 12:41:44 -- common/autotest_common.sh@1359 -- # local bs 00:20:36.132 12:41:44 -- common/autotest_common.sh@1360 -- # local nb 00:20:36.132 12:41:44 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:36.390 12:41:45 -- common/autotest_common.sh@1361 -- # 
bdev_info='[ 00:20:36.390 { 00:20:36.390 "name": "aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9", 00:20:36.390 "aliases": [ 00:20:36.390 "lvs/nvme0n1p0" 00:20:36.390 ], 00:20:36.390 "product_name": "Logical Volume", 00:20:36.390 "block_size": 4096, 00:20:36.390 "num_blocks": 26476544, 00:20:36.390 "uuid": "aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9", 00:20:36.390 "assigned_rate_limits": { 00:20:36.390 "rw_ios_per_sec": 0, 00:20:36.390 "rw_mbytes_per_sec": 0, 00:20:36.390 "r_mbytes_per_sec": 0, 00:20:36.390 "w_mbytes_per_sec": 0 00:20:36.390 }, 00:20:36.390 "claimed": false, 00:20:36.390 "zoned": false, 00:20:36.390 "supported_io_types": { 00:20:36.390 "read": true, 00:20:36.390 "write": true, 00:20:36.390 "unmap": true, 00:20:36.390 "write_zeroes": true, 00:20:36.390 "flush": false, 00:20:36.390 "reset": true, 00:20:36.390 "compare": false, 00:20:36.390 "compare_and_write": false, 00:20:36.390 "abort": false, 00:20:36.390 "nvme_admin": false, 00:20:36.390 "nvme_io": false 00:20:36.390 }, 00:20:36.390 "driver_specific": { 00:20:36.390 "lvol": { 00:20:36.390 "lvol_store_uuid": "532cb311-8699-4381-ad64-f9dce0252a20", 00:20:36.390 "base_bdev": "nvme0n1", 00:20:36.390 "thin_provision": true, 00:20:36.390 "snapshot": false, 00:20:36.390 "clone": false, 00:20:36.390 "esnap_clone": false 00:20:36.390 } 00:20:36.390 } 00:20:36.390 } 00:20:36.390 ]' 00:20:36.390 12:41:45 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:36.390 12:41:45 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:36.390 12:41:45 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:36.390 12:41:45 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:36.390 12:41:45 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:36.390 12:41:45 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:36.390 12:41:45 -- ftl/common.sh@48 -- # cache_size=5171 00:20:36.390 12:41:45 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:36.648 12:41:45 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:36.648 12:41:45 -- ftl/restore.sh@48 -- # get_bdev_size aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:36.648 12:41:45 -- common/autotest_common.sh@1357 -- # local bdev_name=aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:36.648 12:41:45 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:36.648 12:41:45 -- common/autotest_common.sh@1359 -- # local bs 00:20:36.648 12:41:45 -- common/autotest_common.sh@1360 -- # local nb 00:20:36.648 12:41:45 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 00:20:36.906 12:41:45 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:36.906 { 00:20:36.906 "name": "aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9", 00:20:36.906 "aliases": [ 00:20:36.906 "lvs/nvme0n1p0" 00:20:36.906 ], 00:20:36.906 "product_name": "Logical Volume", 00:20:36.906 "block_size": 4096, 00:20:36.906 "num_blocks": 26476544, 00:20:36.906 "uuid": "aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9", 00:20:36.906 "assigned_rate_limits": { 00:20:36.906 "rw_ios_per_sec": 0, 00:20:36.906 "rw_mbytes_per_sec": 0, 00:20:36.906 "r_mbytes_per_sec": 0, 00:20:36.906 "w_mbytes_per_sec": 0 00:20:36.906 }, 00:20:36.906 "claimed": false, 00:20:36.906 "zoned": false, 00:20:36.906 "supported_io_types": { 00:20:36.906 "read": true, 00:20:36.906 "write": true, 00:20:36.906 "unmap": true, 00:20:36.906 "write_zeroes": true, 00:20:36.906 "flush": false, 00:20:36.906 "reset": true, 00:20:36.906 "compare": false, 
00:20:36.906 "compare_and_write": false, 00:20:36.906 "abort": false, 00:20:36.906 "nvme_admin": false, 00:20:36.906 "nvme_io": false 00:20:36.906 }, 00:20:36.906 "driver_specific": { 00:20:36.906 "lvol": { 00:20:36.906 "lvol_store_uuid": "532cb311-8699-4381-ad64-f9dce0252a20", 00:20:36.906 "base_bdev": "nvme0n1", 00:20:36.906 "thin_provision": true, 00:20:36.906 "snapshot": false, 00:20:36.906 "clone": false, 00:20:36.906 "esnap_clone": false 00:20:36.906 } 00:20:36.906 } 00:20:36.906 } 00:20:36.906 ]' 00:20:36.906 12:41:45 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:36.906 12:41:45 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:36.906 12:41:45 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:37.165 12:41:45 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:37.165 12:41:45 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:37.165 12:41:45 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:37.165 12:41:45 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:37.165 12:41:45 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 --l2p_dram_limit 10' 00:20:37.165 12:41:45 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:37.165 12:41:45 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:37.165 12:41:45 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:37.165 12:41:45 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:37.165 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:37.165 12:41:45 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aea3ffcd-f4db-4149-8a50-4eb3d0fa6ba9 --l2p_dram_limit 10 -c nvc0n1p0 00:20:37.165 [2024-05-15 12:41:46.156257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.165 [2024-05-15 12:41:46.156313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:37.165 [2024-05-15 12:41:46.156339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:37.165 [2024-05-15 12:41:46.156353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.165 [2024-05-15 12:41:46.156437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.165 [2024-05-15 12:41:46.156456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.165 [2024-05-15 12:41:46.156472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:37.165 [2024-05-15 12:41:46.156485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.165 [2024-05-15 12:41:46.156756] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:37.165 [2024-05-15 12:41:46.157832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:37.165 [2024-05-15 12:41:46.157883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.165 [2024-05-15 12:41:46.157899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.165 [2024-05-15 12:41:46.157915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:20:37.165 [2024-05-15 12:41:46.157927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.165 [2024-05-15 12:41:46.158152] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 
04fa0f24-bd68-4724-a523-fdd66996a60c 00:20:37.165 [2024-05-15 12:41:46.159922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.165 [2024-05-15 12:41:46.159957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:37.165 [2024-05-15 12:41:46.159973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:37.165 [2024-05-15 12:41:46.159987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.165 [2024-05-15 12:41:46.169409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.165 [2024-05-15 12:41:46.169457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.165 [2024-05-15 12:41:46.169478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.342 ms 00:20:37.165 [2024-05-15 12:41:46.169517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.165 [2024-05-15 12:41:46.169659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.165 [2024-05-15 12:41:46.169682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.165 [2024-05-15 12:41:46.169696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:37.165 [2024-05-15 12:41:46.169716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.165 [2024-05-15 12:41:46.169798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.165 [2024-05-15 12:41:46.169826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:37.165 [2024-05-15 12:41:46.169841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:37.165 [2024-05-15 12:41:46.169856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.165 [2024-05-15 12:41:46.169904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.424 [2024-05-15 12:41:46.175073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.424 [2024-05-15 12:41:46.175111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.424 [2024-05-15 12:41:46.175129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.186 ms 00:20:37.424 [2024-05-15 12:41:46.175142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.424 [2024-05-15 12:41:46.175192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.424 [2024-05-15 12:41:46.175208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:37.424 [2024-05-15 12:41:46.175223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:37.424 [2024-05-15 12:41:46.175235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.424 [2024-05-15 12:41:46.175282] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:37.424 [2024-05-15 12:41:46.175418] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:37.424 [2024-05-15 12:41:46.175443] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:37.424 [2024-05-15 12:41:46.175459] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:37.424 [2024-05-15 12:41:46.175477] ftl_layout.c: 
676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:37.424 [2024-05-15 12:41:46.175508] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:37.424 [2024-05-15 12:41:46.175527] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:37.424 [2024-05-15 12:41:46.175539] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:37.425 [2024-05-15 12:41:46.175700] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:37.425 [2024-05-15 12:41:46.175719] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:37.425 [2024-05-15 12:41:46.175735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.425 [2024-05-15 12:41:46.175747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:37.425 [2024-05-15 12:41:46.175776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:20:37.425 [2024-05-15 12:41:46.175788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.425 [2024-05-15 12:41:46.175889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.425 [2024-05-15 12:41:46.175904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:37.425 [2024-05-15 12:41:46.175919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:37.425 [2024-05-15 12:41:46.175931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.425 [2024-05-15 12:41:46.176021] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:37.425 [2024-05-15 12:41:46.176039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:37.425 [2024-05-15 12:41:46.176054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:37.425 [2024-05-15 12:41:46.176100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:37.425 [2024-05-15 12:41:46.176139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.425 [2024-05-15 12:41:46.176163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:37.425 [2024-05-15 12:41:46.176174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:37.425 [2024-05-15 12:41:46.176189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.425 [2024-05-15 12:41:46.176200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:37.425 [2024-05-15 12:41:46.176213] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:37.425 [2024-05-15 12:41:46.176224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176239] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:37.425 [2024-05-15 12:41:46.176250] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:37.425 [2024-05-15 12:41:46.176262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:37.425 [2024-05-15 12:41:46.176291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:37.425 [2024-05-15 12:41:46.176301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:37.425 [2024-05-15 12:41:46.176324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:37.425 [2024-05-15 12:41:46.176361] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:37.425 [2024-05-15 12:41:46.176396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:37.425 [2024-05-15 12:41:46.176434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176457] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:37.425 [2024-05-15 12:41:46.176468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.425 [2024-05-15 12:41:46.176504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:37.425 [2024-05-15 12:41:46.176523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:37.425 [2024-05-15 12:41:46.176534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.425 [2024-05-15 12:41:46.176547] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:37.425 [2024-05-15 12:41:46.176558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:37.425 [2024-05-15 12:41:46.176573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.425 [2024-05-15 12:41:46.176601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:37.425 [2024-05-15 12:41:46.176613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:37.425 [2024-05-15 12:41:46.176626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:37.425 [2024-05-15 12:41:46.176637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:37.425 [2024-05-15 12:41:46.176652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:37.425 
[2024-05-15 12:41:46.176663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:37.425 [2024-05-15 12:41:46.176678] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:37.425 [2024-05-15 12:41:46.176693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.425 [2024-05-15 12:41:46.176708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:37.425 [2024-05-15 12:41:46.176721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:37.425 [2024-05-15 12:41:46.176734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:37.425 [2024-05-15 12:41:46.176747] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:37.425 [2024-05-15 12:41:46.176761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:37.425 [2024-05-15 12:41:46.176772] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:37.425 [2024-05-15 12:41:46.176787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:37.425 [2024-05-15 12:41:46.176798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:37.425 [2024-05-15 12:41:46.176812] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:37.425 [2024-05-15 12:41:46.176824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:37.425 [2024-05-15 12:41:46.176838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:37.425 [2024-05-15 12:41:46.176850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:37.425 [2024-05-15 12:41:46.176879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:37.425 [2024-05-15 12:41:46.176900] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:37.425 [2024-05-15 12:41:46.176920] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.425 [2024-05-15 12:41:46.176932] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:37.425 [2024-05-15 12:41:46.176947] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:37.425 [2024-05-15 12:41:46.176959] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 
00:20:37.425 [2024-05-15 12:41:46.176973] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:37.425 [2024-05-15 12:41:46.176987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.425 [2024-05-15 12:41:46.177001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:37.425 [2024-05-15 12:41:46.177014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:20:37.425 [2024-05-15 12:41:46.177028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.198971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.199023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.426 [2024-05-15 12:41:46.199043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.883 ms 00:20:37.426 [2024-05-15 12:41:46.199058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.199180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.199201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:37.426 [2024-05-15 12:41:46.199215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:37.426 [2024-05-15 12:41:46.199229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.242372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.242434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.426 [2024-05-15 12:41:46.242455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.067 ms 00:20:37.426 [2024-05-15 12:41:46.242470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.242553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.242579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.426 [2024-05-15 12:41:46.242593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.426 [2024-05-15 12:41:46.242608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.243228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.243262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.426 [2024-05-15 12:41:46.243281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:37.426 [2024-05-15 12:41:46.243295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.243450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.243478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.426 [2024-05-15 12:41:46.243504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:37.426 [2024-05-15 12:41:46.243521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.264914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.264973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:20:37.426 [2024-05-15 12:41:46.264993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.363 ms 00:20:37.426 [2024-05-15 12:41:46.265009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.279405] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:37.426 [2024-05-15 12:41:46.283540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.283574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:37.426 [2024-05-15 12:41:46.283602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.383 ms 00:20:37.426 [2024-05-15 12:41:46.283615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.354967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.426 [2024-05-15 12:41:46.355043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:37.426 [2024-05-15 12:41:46.355089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.293 ms 00:20:37.426 [2024-05-15 12:41:46.355102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.426 [2024-05-15 12:41:46.355182] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:20:37.426 [2024-05-15 12:41:46.355203] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:20:39.969 [2024-05-15 12:41:48.931050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.969 [2024-05-15 12:41:48.931118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:39.969 [2024-05-15 12:41:48.931149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2575.872 ms 00:20:39.969 [2024-05-15 12:41:48.931162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.969 [2024-05-15 12:41:48.931419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.969 [2024-05-15 12:41:48.931447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.969 [2024-05-15 12:41:48.931465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:39.969 [2024-05-15 12:41:48.931478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.969 [2024-05-15 12:41:48.962312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.969 [2024-05-15 12:41:48.962367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:39.969 [2024-05-15 12:41:48.962388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.749 ms 00:20:39.969 [2024-05-15 12:41:48.962401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:48.992421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 12:41:48.992481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:40.227 [2024-05-15 12:41:48.992505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.965 ms 00:20:40.227 [2024-05-15 12:41:48.992531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:48.992998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 
12:41:48.993025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.227 [2024-05-15 12:41:48.993043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:40.227 [2024-05-15 12:41:48.993055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:49.071340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 12:41:49.071401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:40.227 [2024-05-15 12:41:49.071425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.213 ms 00:20:40.227 [2024-05-15 12:41:49.071439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:49.103864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 12:41:49.103922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:40.227 [2024-05-15 12:41:49.103946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.360 ms 00:20:40.227 [2024-05-15 12:41:49.103963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:49.106348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 12:41:49.106398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:40.227 [2024-05-15 12:41:49.106419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.326 ms 00:20:40.227 [2024-05-15 12:41:49.106432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:49.137387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 12:41:49.137466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.227 [2024-05-15 12:41:49.137490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.859 ms 00:20:40.227 [2024-05-15 12:41:49.137522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:49.137603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 12:41:49.137624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.227 [2024-05-15 12:41:49.137642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:40.227 [2024-05-15 12:41:49.137654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:49.137789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.227 [2024-05-15 12:41:49.137808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.227 [2024-05-15 12:41:49.137825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:40.227 [2024-05-15 12:41:49.137840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.227 [2024-05-15 12:41:49.139123] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2982.349 ms, result 0 00:20:40.227 { 00:20:40.227 "name": "ftl0", 00:20:40.227 "uuid": "04fa0f24-bd68-4724-a523-fdd66996a60c" 00:20:40.227 } 00:20:40.227 12:41:49 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:40.227 12:41:49 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:40.485 12:41:49 -- 
ftl/restore.sh@63 -- # echo ']}' 00:20:40.485 12:41:49 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:40.744 [2024-05-15 12:41:49.690460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.744 [2024-05-15 12:41:49.690560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:40.744 [2024-05-15 12:41:49.690588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:40.744 [2024-05-15 12:41:49.690603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.744 [2024-05-15 12:41:49.690671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:40.744 [2024-05-15 12:41:49.694425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.744 [2024-05-15 12:41:49.694471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:40.744 [2024-05-15 12:41:49.694488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:20:40.744 [2024-05-15 12:41:49.694500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.744 [2024-05-15 12:41:49.694847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.744 [2024-05-15 12:41:49.694872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:40.744 [2024-05-15 12:41:49.694893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:20:40.744 [2024-05-15 12:41:49.694905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.744 [2024-05-15 12:41:49.698276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.744 [2024-05-15 12:41:49.698327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:40.744 [2024-05-15 12:41:49.698344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.329 ms 00:20:40.744 [2024-05-15 12:41:49.698355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.744 [2024-05-15 12:41:49.704830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.744 [2024-05-15 12:41:49.704858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:40.744 [2024-05-15 12:41:49.704874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.445 ms 00:20:40.744 [2024-05-15 12:41:49.704889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.744 [2024-05-15 12:41:49.735642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.744 [2024-05-15 12:41:49.735730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:40.744 [2024-05-15 12:41:49.735753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.649 ms 00:20:40.744 [2024-05-15 12:41:49.735765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.004 [2024-05-15 12:41:49.755410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.004 [2024-05-15 12:41:49.755467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:41.004 [2024-05-15 12:41:49.755489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.561 ms 00:20:41.004 [2024-05-15 12:41:49.755554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.004 [2024-05-15 12:41:49.755752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:20:41.004 [2024-05-15 12:41:49.755774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:41.004 [2024-05-15 12:41:49.755791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:20:41.004 [2024-05-15 12:41:49.755803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.004 [2024-05-15 12:41:49.787185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.004 [2024-05-15 12:41:49.787243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:41.004 [2024-05-15 12:41:49.787280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.347 ms 00:20:41.004 [2024-05-15 12:41:49.787293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.004 [2024-05-15 12:41:49.817325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.004 [2024-05-15 12:41:49.817384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:41.004 [2024-05-15 12:41:49.817419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.977 ms 00:20:41.004 [2024-05-15 12:41:49.817431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.004 [2024-05-15 12:41:49.847011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.004 [2024-05-15 12:41:49.847069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:41.004 [2024-05-15 12:41:49.847104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.527 ms 00:20:41.004 [2024-05-15 12:41:49.847116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.004 [2024-05-15 12:41:49.877184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.004 [2024-05-15 12:41:49.877262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:41.004 [2024-05-15 12:41:49.877287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.928 ms 00:20:41.004 [2024-05-15 12:41:49.877300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.004 [2024-05-15 12:41:49.877402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:41.004 [2024-05-15 12:41:49.877429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:20:41.004 [2024-05-15 12:41:49.877594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:41.004 [2024-05-15 12:41:49.877773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.877997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878676] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:41.005 [2024-05-15 12:41:49.878917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:41.005 [2024-05-15 12:41:49.878932] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04fa0f24-bd68-4724-a523-fdd66996a60c 00:20:41.005 [2024-05-15 12:41:49.878949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:41.005 [2024-05-15 12:41:49.878963] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:41.005 [2024-05-15 12:41:49.878974] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:41.005 [2024-05-15 12:41:49.878988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:41.005 [2024-05-15 12:41:49.878999] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:41.005 [2024-05-15 12:41:49.879013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:41.005 [2024-05-15 12:41:49.879025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:41.005 [2024-05-15 12:41:49.879038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:41.005 [2024-05-15 
12:41:49.879049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:41.005 [2024-05-15 12:41:49.879068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.005 [2024-05-15 12:41:49.879080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:41.005 [2024-05-15 12:41:49.879095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:20:41.005 [2024-05-15 12:41:49.879107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.005 [2024-05-15 12:41:49.896041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.005 [2024-05-15 12:41:49.896093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:41.005 [2024-05-15 12:41:49.896114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.841 ms 00:20:41.005 [2024-05-15 12:41:49.896126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.006 [2024-05-15 12:41:49.896396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.006 [2024-05-15 12:41:49.896420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:41.006 [2024-05-15 12:41:49.896437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:20:41.006 [2024-05-15 12:41:49.896448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.006 [2024-05-15 12:41:49.956214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.006 [2024-05-15 12:41:49.956290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.006 [2024-05-15 12:41:49.956313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.006 [2024-05-15 12:41:49.956325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.006 [2024-05-15 12:41:49.956423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.006 [2024-05-15 12:41:49.956438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.006 [2024-05-15 12:41:49.956453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.006 [2024-05-15 12:41:49.956465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.006 [2024-05-15 12:41:49.956626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.006 [2024-05-15 12:41:49.956647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.006 [2024-05-15 12:41:49.956664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.006 [2024-05-15 12:41:49.956676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.006 [2024-05-15 12:41:49.956707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.006 [2024-05-15 12:41:49.956721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:41.006 [2024-05-15 12:41:49.956736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.006 [2024-05-15 12:41:49.956748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.264 [2024-05-15 12:41:50.061835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.264 [2024-05-15 12:41:50.061914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.264 [2024-05-15 12:41:50.061937] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.264 [2024-05-15 12:41:50.061952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.264 [2024-05-15 12:41:50.103580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.264 [2024-05-15 12:41:50.103655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.264 [2024-05-15 12:41:50.103678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.264 [2024-05-15 12:41:50.103691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.264 [2024-05-15 12:41:50.103811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.264 [2024-05-15 12:41:50.103830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.264 [2024-05-15 12:41:50.103846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.264 [2024-05-15 12:41:50.103858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.264 [2024-05-15 12:41:50.103928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.264 [2024-05-15 12:41:50.103945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.265 [2024-05-15 12:41:50.103960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.265 [2024-05-15 12:41:50.103973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.265 [2024-05-15 12:41:50.104105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.265 [2024-05-15 12:41:50.104129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.265 [2024-05-15 12:41:50.104145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.265 [2024-05-15 12:41:50.104157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.265 [2024-05-15 12:41:50.104221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.265 [2024-05-15 12:41:50.104239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:41.265 [2024-05-15 12:41:50.104254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.265 [2024-05-15 12:41:50.104267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.265 [2024-05-15 12:41:50.104321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.265 [2024-05-15 12:41:50.104351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.265 [2024-05-15 12:41:50.104367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.265 [2024-05-15 12:41:50.104379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.265 [2024-05-15 12:41:50.104440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.265 [2024-05-15 12:41:50.104457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.265 [2024-05-15 12:41:50.104472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.265 [2024-05-15 12:41:50.104484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.265 [2024-05-15 12:41:50.104682] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 414.172 ms, result 0 00:20:41.265 true 00:20:41.265 12:41:50 -- 
ftl/restore.sh@66 -- # killprocess 74698 00:20:41.265 12:41:50 -- common/autotest_common.sh@926 -- # '[' -z 74698 ']' 00:20:41.265 12:41:50 -- common/autotest_common.sh@930 -- # kill -0 74698 00:20:41.265 12:41:50 -- common/autotest_common.sh@931 -- # uname 00:20:41.265 12:41:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:41.265 12:41:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74698 00:20:41.265 12:41:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:41.265 killing process with pid 74698 00:20:41.265 12:41:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:41.265 12:41:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74698' 00:20:41.265 12:41:50 -- common/autotest_common.sh@945 -- # kill 74698 00:20:41.265 12:41:50 -- common/autotest_common.sh@950 -- # wait 74698 00:20:46.593 12:41:55 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:51.860 262144+0 records in 00:20:51.860 262144+0 records out 00:20:51.860 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.71016 s, 228 MB/s 00:20:51.860 12:42:00 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:53.246 12:42:02 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.503 [2024-05-15 12:42:02.315159] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:53.504 [2024-05-15 12:42:02.315331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74958 ] 00:20:53.504 [2024-05-15 12:42:02.492402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.762 [2024-05-15 12:42:02.767181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.328 [2024-05-15 12:42:03.112991] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.328 [2024-05-15 12:42:03.113108] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.328 [2024-05-15 12:42:03.269912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.328 [2024-05-15 12:42:03.269992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.328 [2024-05-15 12:42:03.270029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:54.328 [2024-05-15 12:42:03.270042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.328 [2024-05-15 12:42:03.270113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.328 [2024-05-15 12:42:03.270132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.328 [2024-05-15 12:42:03.270145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:54.328 [2024-05-15 12:42:03.270157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.328 [2024-05-15 12:42:03.270187] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.328 [2024-05-15 12:42:03.271111] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
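For context on this second startup: the lines above are the restore test's write phase. After tearing ftl0 down via 'FTL shutdown' and killprocess, restore.sh generates 1 GiB of random data, checksums it (presumably for comparison after the restore), and replays the bdev subsystem config captured earlier with save_subsystem_config -n bdev, so spdk_dd re-creates ftl0 from ftl.json and copies the file into it. Because the superblock already exists on the lvol, the startup trace that follows runs 'Load super block' (with SHM: clean 0) instead of 'Create new FTL'. A minimal sketch of that write phase, using repo-relative paths in place of the absolute /home/vagrant/spdk_repo/spdk/... paths in this log:

  # 1 GiB of random payload (bs=4K count=256K, as at restore.sh@69 above)
  dd if=/dev/urandom of=test/ftl/testfile bs=4K count=256K
  # checksum recorded now, for verification after the restore
  md5sum test/ftl/testfile
  # spdk_dd replays the saved bdev config from ftl.json, re-creating ftl0,
  # then writes the file into the FTL bdev (restore.sh@73 above)
  build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 --json=test/ftl/config/ftl.json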
00:20:54.329 [2024-05-15 12:42:03.271152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.271167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.329 [2024-05-15 12:42:03.271180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:20:54.329 [2024-05-15 12:42:03.271192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.273131] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.329 [2024-05-15 12:42:03.290730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.290776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.329 [2024-05-15 12:42:03.290802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.600 ms 00:20:54.329 [2024-05-15 12:42:03.290815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.290917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.290935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.329 [2024-05-15 12:42:03.290948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:54.329 [2024-05-15 12:42:03.290959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.300109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.300178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.329 [2024-05-15 12:42:03.300227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.037 ms 00:20:54.329 [2024-05-15 12:42:03.300238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.300368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.300388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.329 [2024-05-15 12:42:03.300402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:54.329 [2024-05-15 12:42:03.300413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.300487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.300510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.329 [2024-05-15 12:42:03.300537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:54.329 [2024-05-15 12:42:03.300551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.300595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.329 [2024-05-15 12:42:03.305724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.305767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.329 [2024-05-15 12:42:03.305784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.144 ms 00:20:54.329 [2024-05-15 12:42:03.305795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.305846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 
[2024-05-15 12:42:03.305863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.329 [2024-05-15 12:42:03.305876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:54.329 [2024-05-15 12:42:03.305888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.305969] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.329 [2024-05-15 12:42:03.306003] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:54.329 [2024-05-15 12:42:03.306063] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.329 [2024-05-15 12:42:03.306083] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:54.329 [2024-05-15 12:42:03.306163] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:54.329 [2024-05-15 12:42:03.306179] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.329 [2024-05-15 12:42:03.306194] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:54.329 [2024-05-15 12:42:03.306209] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306223] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306240] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:54.329 [2024-05-15 12:42:03.306251] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.329 [2024-05-15 12:42:03.306262] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:54.329 [2024-05-15 12:42:03.306272] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:54.329 [2024-05-15 12:42:03.306285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.306296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.329 [2024-05-15 12:42:03.306308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:20:54.329 [2024-05-15 12:42:03.306319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.306396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.329 [2024-05-15 12:42:03.306411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.329 [2024-05-15 12:42:03.306428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:54.329 [2024-05-15 12:42:03.306439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.329 [2024-05-15 12:42:03.306539] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.329 [2024-05-15 12:42:03.306568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.329 [2024-05-15 12:42:03.306581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306604] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.329 [2024-05-15 12:42:03.306614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306635] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.329 [2024-05-15 12:42:03.306645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.329 [2024-05-15 12:42:03.306666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.329 [2024-05-15 12:42:03.306676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:54.329 [2024-05-15 12:42:03.306686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.329 [2024-05-15 12:42:03.306697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.329 [2024-05-15 12:42:03.306708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:54.329 [2024-05-15 12:42:03.306721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306732] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.329 [2024-05-15 12:42:03.306743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:54.329 [2024-05-15 12:42:03.306754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:54.329 [2024-05-15 12:42:03.306774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:54.329 [2024-05-15 12:42:03.306800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.329 [2024-05-15 12:42:03.306822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.329 [2024-05-15 12:42:03.306854] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.329 [2024-05-15 12:42:03.306885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306905] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.329 [2024-05-15 12:42:03.306915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:54.329 [2024-05-15 12:42:03.306925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:54.329 [2024-05-15 12:42:03.306936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.329 [2024-05-15 12:42:03.306946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:54.329 [2024-05-15 
12:42:03.306957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.329 [2024-05-15 12:42:03.306968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.329 [2024-05-15 12:42:03.306979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:54.329 [2024-05-15 12:42:03.306989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.329 [2024-05-15 12:42:03.306999] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.329 [2024-05-15 12:42:03.307011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.329 [2024-05-15 12:42:03.307022] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.329 [2024-05-15 12:42:03.307033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.329 [2024-05-15 12:42:03.307050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.329 [2024-05-15 12:42:03.307062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.329 [2024-05-15 12:42:03.307072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.329 [2024-05-15 12:42:03.307085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.329 [2024-05-15 12:42:03.307096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.329 [2024-05-15 12:42:03.307107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.329 [2024-05-15 12:42:03.307119] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.329 [2024-05-15 12:42:03.307133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.330 [2024-05-15 12:42:03.307146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:54.330 [2024-05-15 12:42:03.307158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:54.330 [2024-05-15 12:42:03.307170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:54.330 [2024-05-15 12:42:03.307182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:54.330 [2024-05-15 12:42:03.307193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:54.330 [2024-05-15 12:42:03.307204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:54.330 [2024-05-15 12:42:03.307217] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:54.330 [2024-05-15 12:42:03.307228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:54.330 [2024-05-15 12:42:03.307240] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:54.330 [2024-05-15 12:42:03.307252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:54.330 [2024-05-15 12:42:03.307263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:54.330 [2024-05-15 12:42:03.307274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:54.330 [2024-05-15 12:42:03.307286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:54.330 [2024-05-15 12:42:03.307298] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.330 [2024-05-15 12:42:03.307310] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.330 [2024-05-15 12:42:03.307323] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.330 [2024-05-15 12:42:03.307334] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.330 [2024-05-15 12:42:03.307346] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.330 [2024-05-15 12:42:03.307358] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.330 [2024-05-15 12:42:03.307370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.330 [2024-05-15 12:42:03.307381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.330 [2024-05-15 12:42:03.307393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:20:54.330 [2024-05-15 12:42:03.307404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.330 [2024-05-15 12:42:03.329630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.330 [2024-05-15 12:42:03.329701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.330 [2024-05-15 12:42:03.329723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.168 ms 00:20:54.330 [2024-05-15 12:42:03.329735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.330 [2024-05-15 12:42:03.329867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.330 [2024-05-15 12:42:03.329883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.330 [2024-05-15 12:42:03.329903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:54.330 [2024-05-15 12:42:03.329915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.386825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.588 [2024-05-15 12:42:03.386912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.588 [2024-05-15 12:42:03.386933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.819 ms 00:20:54.588 [2024-05-15 12:42:03.386946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.387048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:54.588 [2024-05-15 12:42:03.387065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.588 [2024-05-15 12:42:03.387078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.588 [2024-05-15 12:42:03.387089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.387715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.588 [2024-05-15 12:42:03.387744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.588 [2024-05-15 12:42:03.387759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:20:54.588 [2024-05-15 12:42:03.387770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.387931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.588 [2024-05-15 12:42:03.387950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.588 [2024-05-15 12:42:03.387962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:54.588 [2024-05-15 12:42:03.387973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.407225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.588 [2024-05-15 12:42:03.407285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.588 [2024-05-15 12:42:03.407319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.225 ms 00:20:54.588 [2024-05-15 12:42:03.407331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.423646] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:54.588 [2024-05-15 12:42:03.423709] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.588 [2024-05-15 12:42:03.423743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.588 [2024-05-15 12:42:03.423756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.588 [2024-05-15 12:42:03.423770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.272 ms 00:20:54.588 [2024-05-15 12:42:03.423781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.452745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.588 [2024-05-15 12:42:03.452834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.588 [2024-05-15 12:42:03.452853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.911 ms 00:20:54.588 [2024-05-15 12:42:03.452866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.469170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.588 [2024-05-15 12:42:03.469247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.588 [2024-05-15 12:42:03.469282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.211 ms 00:20:54.588 [2024-05-15 12:42:03.469293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.588 [2024-05-15 12:42:03.484158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.484241] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.589 [2024-05-15 12:42:03.484260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.814 ms 00:20:54.589 [2024-05-15 12:42:03.484272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.484884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.484928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.589 [2024-05-15 12:42:03.484944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:20:54.589 [2024-05-15 12:42:03.484956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.566791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.566878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.589 [2024-05-15 12:42:03.566900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.806 ms 00:20:54.589 [2024-05-15 12:42:03.566912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.579804] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:54.589 [2024-05-15 12:42:03.583989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.584036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:54.589 [2024-05-15 12:42:03.584061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.998 ms 00:20:54.589 [2024-05-15 12:42:03.584074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.584202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.584222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:54.589 [2024-05-15 12:42:03.584241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:54.589 [2024-05-15 12:42:03.584252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.584345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.584363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.589 [2024-05-15 12:42:03.584377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:54.589 [2024-05-15 12:42:03.584389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.586506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.586560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:54.589 [2024-05-15 12:42:03.586576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:20:54.589 [2024-05-15 12:42:03.586594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.586634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.586649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:54.589 [2024-05-15 12:42:03.586662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:54.589 [2024-05-15 12:42:03.586673] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:54.589 [2024-05-15 12:42:03.586729] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:54.589 [2024-05-15 12:42:03.586749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.589 [2024-05-15 12:42:03.586761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:54.589 [2024-05-15 12:42:03.586772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:54.589 [2024-05-15 12:42:03.586783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.847 [2024-05-15 12:42:03.618139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.847 [2024-05-15 12:42:03.618218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.847 [2024-05-15 12:42:03.618237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.322 ms 00:20:54.847 [2024-05-15 12:42:03.618249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.847 [2024-05-15 12:42:03.618341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.847 [2024-05-15 12:42:03.618359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:54.847 [2024-05-15 12:42:03.618373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:54.847 [2024-05-15 12:42:03.618391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.847 [2024-05-15 12:42:03.619783] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 349.333 ms, result 0 00:21:33.155  Copying: 1024/1024 [MB] (average 26 MBps)[2024-05-15 12:42:42.016118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.016168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:33.155 [2024-05-15 12:42:42.016189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:33.155 [2024-05-15 12:42:42.016203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.016233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*:
[FTL][ftl0] FTL IO channel destroy on app_thread 00:21:33.155 [2024-05-15 12:42:42.019903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.019952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:33.155 [2024-05-15 12:42:42.019983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:21:33.155 [2024-05-15 12:42:42.019994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.021822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.021869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:33.155 [2024-05-15 12:42:42.021885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.792 ms 00:21:33.155 [2024-05-15 12:42:42.021897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.037376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.037419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:33.155 [2024-05-15 12:42:42.037449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.448 ms 00:21:33.155 [2024-05-15 12:42:42.037461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.043912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.043972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:33.155 [2024-05-15 12:42:42.043987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.393 ms 00:21:33.155 [2024-05-15 12:42:42.043999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.076047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.076140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:33.155 [2024-05-15 12:42:42.076177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.969 ms 00:21:33.155 [2024-05-15 12:42:42.076189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.093829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.093876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:33.155 [2024-05-15 12:42:42.093894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.555 ms 00:21:33.155 [2024-05-15 12:42:42.093907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.094076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.094097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:33.155 [2024-05-15 12:42:42.094119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:33.155 [2024-05-15 12:42:42.094131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.124867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.124913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:33.155 [2024-05-15 12:42:42.124930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.713 ms 
00:21:33.155 [2024-05-15 12:42:42.124942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.155 [2024-05-15 12:42:42.155405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.155 [2024-05-15 12:42:42.155465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:33.155 [2024-05-15 12:42:42.155500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.407 ms 00:21:33.155 [2024-05-15 12:42:42.155522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.414 [2024-05-15 12:42:42.185359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.414 [2024-05-15 12:42:42.185410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:33.414 [2024-05-15 12:42:42.185428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.792 ms 00:21:33.414 [2024-05-15 12:42:42.185439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.414 [2024-05-15 12:42:42.215324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.414 [2024-05-15 12:42:42.215389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:33.414 [2024-05-15 12:42:42.215407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.746 ms 00:21:33.414 [2024-05-15 12:42:42.215418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.414 [2024-05-15 12:42:42.215462] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:33.414 [2024-05-15 12:42:42.215485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.215992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216290] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216612] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:33.414 [2024-05-15 12:42:42.216772] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:33.414 [2024-05-15 12:42:42.216785] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04fa0f24-bd68-4724-a523-fdd66996a60c 00:21:33.414 [2024-05-15 12:42:42.216797] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:33.414 [2024-05-15 12:42:42.216816] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:33.414 [2024-05-15 12:42:42.216827] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:33.414 [2024-05-15 12:42:42.216839] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:33.414 [2024-05-15 12:42:42.216849] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:33.414 [2024-05-15 12:42:42.216861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:33.414 [2024-05-15 12:42:42.216871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:33.414 [2024-05-15 12:42:42.216881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:33.414 [2024-05-15 12:42:42.216891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:33.414 [2024-05-15 12:42:42.216902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.414 [2024-05-15 12:42:42.216914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:33.414 [2024-05-15 12:42:42.216927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:21:33.414 [2024-05-15 12:42:42.216938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.414 [2024-05-15 12:42:42.233725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.414 [2024-05-15 12:42:42.233772] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:33.414 [2024-05-15 12:42:42.233789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.717 ms 00:21:33.414 [2024-05-15 12:42:42.233801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.414 [2024-05-15 12:42:42.234057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.414 [2024-05-15 12:42:42.234086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:33.414 [2024-05-15 12:42:42.234101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:21:33.414 [2024-05-15 12:42:42.234112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.414 [2024-05-15 12:42:42.280364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.414 [2024-05-15 12:42:42.280444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.414 [2024-05-15 12:42:42.280479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.414 [2024-05-15 12:42:42.280491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.415 [2024-05-15 12:42:42.280582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.415 [2024-05-15 12:42:42.280599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.415 [2024-05-15 12:42:42.280612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.415 [2024-05-15 12:42:42.280624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.415 [2024-05-15 12:42:42.280735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.415 [2024-05-15 12:42:42.280755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.415 [2024-05-15 12:42:42.280768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.415 [2024-05-15 12:42:42.280779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.415 [2024-05-15 12:42:42.280811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.415 [2024-05-15 12:42:42.280828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.415 [2024-05-15 12:42:42.280841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.415 [2024-05-15 12:42:42.280852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.415 [2024-05-15 12:42:42.387196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.415 [2024-05-15 12:42:42.387272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.415 [2024-05-15 12:42:42.387292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.415 [2024-05-15 12:42:42.387304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.672 [2024-05-15 12:42:42.429011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.672 [2024-05-15 12:42:42.429070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.672 [2024-05-15 12:42:42.429096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.672 [2024-05-15 12:42:42.429109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.673 [2024-05-15 12:42:42.429209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:33.673 [2024-05-15 12:42:42.429236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.673 [2024-05-15 12:42:42.429248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.673 [2024-05-15 12:42:42.429260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.673 [2024-05-15 12:42:42.429318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.673 [2024-05-15 12:42:42.429335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.673 [2024-05-15 12:42:42.429348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.673 [2024-05-15 12:42:42.429359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.673 [2024-05-15 12:42:42.429479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.673 [2024-05-15 12:42:42.429529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.673 [2024-05-15 12:42:42.429562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.673 [2024-05-15 12:42:42.429574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.673 [2024-05-15 12:42:42.429626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.673 [2024-05-15 12:42:42.429644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:33.673 [2024-05-15 12:42:42.429656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.673 [2024-05-15 12:42:42.429667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.673 [2024-05-15 12:42:42.429714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.673 [2024-05-15 12:42:42.429729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.673 [2024-05-15 12:42:42.429747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.673 [2024-05-15 12:42:42.429759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.673 [2024-05-15 12:42:42.429812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.673 [2024-05-15 12:42:42.429828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.673 [2024-05-15 12:42:42.429840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.673 [2024-05-15 12:42:42.429851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.673 [2024-05-15 12:42:42.430001] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.846 ms, result 0 00:21:35.090 00:21:35.090 00:21:35.090 12:42:43 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:35.090 [2024-05-15 12:42:43.781567] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
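restore.sh@74 now runs spdk_dd in the opposite direction: --ib=ftl0 --of=testfile with --count=262144, i.e. the same 262144 4 KiB blocks (1 GiB) that were written before the 'FTL shutdown' above, so a clean restore must return them byte for byte; the second FTL startup that follows in the log is this spdk_dd instance (pid75381) restoring the device state before copying out. Since the read-back overwrites testfile itself, the md5sum taken at restore.sh@70 only proves anything if its output was saved. A sketch of the implied verification, reusing the variables from the sketch above, with testfile.md5 as a hypothetical location for the saved digest (the capture and the comparison are not visible in this part of the trace):

    # restore.sh@74: read the 1 GiB back out of ftl0 (262144 blocks x 4 KiB).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of="$testfile" \
        --json="$ftl_json" --count=262144

    # Hypothetical check: compare against the digest recorded before shutdown.
    md5sum -c "$testfile.md5"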
00:21:35.090 [2024-05-15 12:42:43.781733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75381 ] 00:21:35.090 [2024-05-15 12:42:43.954707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.346 [2024-05-15 12:42:44.193195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.604 [2024-05-15 12:42:44.536867] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.604 [2024-05-15 12:42:44.536980] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.863 [2024-05-15 12:42:44.693968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.694066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.863 [2024-05-15 12:42:44.694097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:35.863 [2024-05-15 12:42:44.694112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.694180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.694198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.863 [2024-05-15 12:42:44.694212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:35.863 [2024-05-15 12:42:44.694224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.694255] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.863 [2024-05-15 12:42:44.695159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.863 [2024-05-15 12:42:44.695200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.695214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.863 [2024-05-15 12:42:44.695228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:21:35.863 [2024-05-15 12:42:44.695241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.697140] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.863 [2024-05-15 12:42:44.713826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.713872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.863 [2024-05-15 12:42:44.713897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.688 ms 00:21:35.863 [2024-05-15 12:42:44.713911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.713981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.714002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.863 [2024-05-15 12:42:44.714016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:35.863 [2024-05-15 12:42:44.714029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.722689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 
12:42:44.722751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.863 [2024-05-15 12:42:44.722768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.571 ms 00:21:35.863 [2024-05-15 12:42:44.722780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.722895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.722916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.863 [2024-05-15 12:42:44.722929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:35.863 [2024-05-15 12:42:44.722942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.722999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.723022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.863 [2024-05-15 12:42:44.723035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:35.863 [2024-05-15 12:42:44.723047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.723087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.863 [2024-05-15 12:42:44.728014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.728055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.863 [2024-05-15 12:42:44.728071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.939 ms 00:21:35.863 [2024-05-15 12:42:44.728083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.728127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.728143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.863 [2024-05-15 12:42:44.728156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:35.863 [2024-05-15 12:42:44.728169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.728232] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.863 [2024-05-15 12:42:44.728265] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:35.863 [2024-05-15 12:42:44.728309] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.863 [2024-05-15 12:42:44.728331] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:35.863 [2024-05-15 12:42:44.728411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:35.863 [2024-05-15 12:42:44.728428] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.863 [2024-05-15 12:42:44.728443] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:35.863 [2024-05-15 12:42:44.728459] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.863 [2024-05-15 12:42:44.728474] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.863 [2024-05-15 12:42:44.728512] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.863 [2024-05-15 12:42:44.728527] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.863 [2024-05-15 12:42:44.728540] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:35.863 [2024-05-15 12:42:44.728551] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:35.863 [2024-05-15 12:42:44.728565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.728576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.863 [2024-05-15 12:42:44.728589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:21:35.863 [2024-05-15 12:42:44.728601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.728686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.863 [2024-05-15 12:42:44.728702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.863 [2024-05-15 12:42:44.728719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:35.863 [2024-05-15 12:42:44.728731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.863 [2024-05-15 12:42:44.728816] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.863 [2024-05-15 12:42:44.728832] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.863 [2024-05-15 12:42:44.728845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.863 [2024-05-15 12:42:44.728858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.863 [2024-05-15 12:42:44.728870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.863 [2024-05-15 12:42:44.728882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.863 [2024-05-15 12:42:44.728893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.863 [2024-05-15 12:42:44.728904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.863 [2024-05-15 12:42:44.728916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.863 [2024-05-15 12:42:44.728927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.863 [2024-05-15 12:42:44.728938] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.863 [2024-05-15 12:42:44.728949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.863 [2024-05-15 12:42:44.728959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.863 [2024-05-15 12:42:44.728970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.863 [2024-05-15 12:42:44.728981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:35.863 [2024-05-15 12:42:44.728994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.863 [2024-05-15 12:42:44.729005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.863 [2024-05-15 12:42:44.729017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:35.863 [2024-05-15 12:42:44.729028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:35.863 [2024-05-15 12:42:44.729040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:35.863 [2024-05-15 12:42:44.729051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:35.863 [2024-05-15 12:42:44.729076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:35.863 [2024-05-15 12:42:44.729088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.863 [2024-05-15 12:42:44.729100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.863 [2024-05-15 12:42:44.729111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:35.863 [2024-05-15 12:42:44.729122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.863 [2024-05-15 12:42:44.729133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:35.863 [2024-05-15 12:42:44.729144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:35.863 [2024-05-15 12:42:44.729155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.863 [2024-05-15 12:42:44.729166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.863 [2024-05-15 12:42:44.729177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:35.863 [2024-05-15 12:42:44.729188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.863 [2024-05-15 12:42:44.729199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:35.863 [2024-05-15 12:42:44.729211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:35.863 [2024-05-15 12:42:44.729222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.863 [2024-05-15 12:42:44.729233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.863 [2024-05-15 12:42:44.729252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.863 [2024-05-15 12:42:44.729264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.864 [2024-05-15 12:42:44.729275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:35.864 [2024-05-15 12:42:44.729285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.864 [2024-05-15 12:42:44.729296] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.864 [2024-05-15 12:42:44.729308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.864 [2024-05-15 12:42:44.729320] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.864 [2024-05-15 12:42:44.729332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.864 [2024-05-15 12:42:44.729351] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.864 [2024-05-15 12:42:44.729362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.864 [2024-05-15 12:42:44.729374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.864 [2024-05-15 12:42:44.729387] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.864 [2024-05-15 12:42:44.729398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.864 [2024-05-15 12:42:44.729410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.864 [2024-05-15 12:42:44.729423] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.864 [2024-05-15 12:42:44.729438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.864 [2024-05-15 12:42:44.729451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.864 [2024-05-15 12:42:44.729463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:35.864 [2024-05-15 12:42:44.729475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:35.864 [2024-05-15 12:42:44.729487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:35.864 [2024-05-15 12:42:44.729531] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:35.864 [2024-05-15 12:42:44.729546] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:35.864 [2024-05-15 12:42:44.729558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:35.864 [2024-05-15 12:42:44.729570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:35.864 [2024-05-15 12:42:44.729582] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:35.864 [2024-05-15 12:42:44.729594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:35.864 [2024-05-15 12:42:44.729606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:35.864 [2024-05-15 12:42:44.729618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:35.864 [2024-05-15 12:42:44.729630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:35.864 [2024-05-15 12:42:44.729642] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.864 [2024-05-15 12:42:44.729655] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.864 [2024-05-15 12:42:44.729669] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.864 [2024-05-15 12:42:44.729681] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.864 [2024-05-15 12:42:44.729693] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.864 [2024-05-15 12:42:44.729705] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:35.864 [2024-05-15 12:42:44.729718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.729731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.864 [2024-05-15 12:42:44.729743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:21:35.864 [2024-05-15 12:42:44.729755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.752044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.752099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.864 [2024-05-15 12:42:44.752118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.230 ms 00:21:35.864 [2024-05-15 12:42:44.752131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.752240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.752261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.864 [2024-05-15 12:42:44.752280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:35.864 [2024-05-15 12:42:44.752292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.809253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.809325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.864 [2024-05-15 12:42:44.809346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.884 ms 00:21:35.864 [2024-05-15 12:42:44.809361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.809444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.809462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.864 [2024-05-15 12:42:44.809476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:35.864 [2024-05-15 12:42:44.809488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.810166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.810198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.864 [2024-05-15 12:42:44.810214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:21:35.864 [2024-05-15 12:42:44.810226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.810387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.810407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.864 [2024-05-15 12:42:44.810420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:35.864 [2024-05-15 12:42:44.810432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.830649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.830698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.864 [2024-05-15 12:42:44.830716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.188 ms 00:21:35.864 [2024-05-15 
12:42:44.830729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.864 [2024-05-15 12:42:44.847519] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:35.864 [2024-05-15 12:42:44.847572] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:35.864 [2024-05-15 12:42:44.847591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.864 [2024-05-15 12:42:44.847605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:35.864 [2024-05-15 12:42:44.847620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.695 ms 00:21:35.864 [2024-05-15 12:42:44.847632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:44.876932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:44.877010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:36.123 [2024-05-15 12:42:44.877030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.249 ms 00:21:36.123 [2024-05-15 12:42:44.877044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:44.892434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:44.892476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:36.123 [2024-05-15 12:42:44.892503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.332 ms 00:21:36.123 [2024-05-15 12:42:44.892518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:44.907506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:44.907546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:36.123 [2024-05-15 12:42:44.907563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.941 ms 00:21:36.123 [2024-05-15 12:42:44.907575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:44.908121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:44.908157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:36.123 [2024-05-15 12:42:44.908174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:21:36.123 [2024-05-15 12:42:44.908187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:44.987232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:44.987306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:36.123 [2024-05-15 12:42:44.987327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.018 ms 00:21:36.123 [2024-05-15 12:42:44.987340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:44.999847] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:36.123 [2024-05-15 12:42:45.003834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.003873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:36.123 [2024-05-15 12:42:45.003892] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.422 ms 00:21:36.123 [2024-05-15 12:42:45.003905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.004031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.004054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:36.123 [2024-05-15 12:42:45.004068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:36.123 [2024-05-15 12:42:45.004080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.004172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.004195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:36.123 [2024-05-15 12:42:45.004209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:36.123 [2024-05-15 12:42:45.004221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.006280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.006321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:36.123 [2024-05-15 12:42:45.006342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:21:36.123 [2024-05-15 12:42:45.006354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.006392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.006408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:36.123 [2024-05-15 12:42:45.006421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:36.123 [2024-05-15 12:42:45.006433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.006485] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:36.123 [2024-05-15 12:42:45.006516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.006529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:36.123 [2024-05-15 12:42:45.006542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:36.123 [2024-05-15 12:42:45.006559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.037419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.037534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:36.123 [2024-05-15 12:42:45.037557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.831 ms 00:21:36.123 [2024-05-15 12:42:45.037571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.037670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.123 [2024-05-15 12:42:45.037699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:36.123 [2024-05-15 12:42:45.037714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:36.123 [2024-05-15 12:42:45.037727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.123 [2024-05-15 12:42:45.039172] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 344.678 ms, result 0 00:22:14.807  Copying: 1024/1024 [MB] (average 27 MBps)[2024-05-15 12:43:23.591771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.591858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:14.807 [2024-05-15 12:43:23.591881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:14.807 [2024-05-15 12:43:23.591894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.591928] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:14.807 [2024-05-15 12:43:23.595927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.595974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:14.807 [2024-05-15 12:43:23.595990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.964 ms 00:22:14.807 [2024-05-15 12:43:23.596009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.596286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.596315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:14.807 [2024-05-15 12:43:23.596329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:14.807 [2024-05-15 12:43:23.596342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.599803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.599831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:14.807 [2024-05-15 12:43:23.599846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.440 ms 00:22:14.807 [2024-05-15 12:43:23.599857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.606707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.606737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:14.807 [2024-05-15 12:43:23.606752] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.818 ms 00:22:14.807 [2024-05-15 12:43:23.606764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.640410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.640466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:14.807 [2024-05-15 12:43:23.640487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.563 ms 00:22:14.807 [2024-05-15 12:43:23.640515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.659065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.659125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:14.807 [2024-05-15 12:43:23.659145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.482 ms 00:22:14.807 [2024-05-15 12:43:23.659158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.659372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.659412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:14.807 [2024-05-15 12:43:23.659428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:22:14.807 [2024-05-15 12:43:23.659440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.690975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.691050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:14.807 [2024-05-15 12:43:23.691071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.507 ms 00:22:14.807 [2024-05-15 12:43:23.691084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.721273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.721318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:14.807 [2024-05-15 12:43:23.721338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.116 ms 00:22:14.807 [2024-05-15 12:43:23.721351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.751024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.751066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:14.807 [2024-05-15 12:43:23.751085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.622 ms 00:22:14.807 [2024-05-15 12:43:23.751098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.780764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.807 [2024-05-15 12:43:23.780806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:14.807 [2024-05-15 12:43:23.780825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.551 ms 00:22:14.807 [2024-05-15 12:43:23.780838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.807 [2024-05-15 12:43:23.780882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:14.807 [2024-05-15 12:43:23.780907] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.780922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.780934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.780948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.780961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.780973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.780986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.780999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781229] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:14.807 [2024-05-15 12:43:23.781441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 
12:43:23.781575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:22:14.808 [2024-05-15 12:43:23.781897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.781998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:22:14.808 [2024-05-15 12:43:23.782237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:14.808 [2024-05-15 12:43:23.782249] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04fa0f24-bd68-4724-a523-fdd66996a60c 00:22:14.808 [2024-05-15 12:43:23.782270] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:14.808 [2024-05-15 12:43:23.782282] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:14.808 [2024-05-15 12:43:23.782293] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:14.808 [2024-05-15 12:43:23.782305] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:14.808 [2024-05-15 12:43:23.782317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:14.808 [2024-05-15 12:43:23.782329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:14.808 [2024-05-15 12:43:23.782340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:14.808 [2024-05-15 12:43:23.782351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:14.808 [2024-05-15 12:43:23.782361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:14.808 [2024-05-15 12:43:23.782373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.808 [2024-05-15 12:43:23.782390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:14.808 [2024-05-15 12:43:23.782404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:22:14.808 [2024-05-15 12:43:23.782429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.808 [2024-05-15 12:43:23.799159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.808 [2024-05-15 12:43:23.799196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:14.808 [2024-05-15 12:43:23.799213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.674 ms 00:22:14.808 [2024-05-15 12:43:23.799225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.808 [2024-05-15 12:43:23.799480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.808 [2024-05-15 12:43:23.799523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:14.808 [2024-05-15 12:43:23.799539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:22:14.808 [2024-05-15 12:43:23.799559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.846419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.846481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.072 [2024-05-15 12:43:23.846511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.846525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.846619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.846636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.072 [2024-05-15 12:43:23.846649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.846669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:15.072 [2024-05-15 12:43:23.846779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.846798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.072 [2024-05-15 12:43:23.846812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.846824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.846847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.846861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.072 [2024-05-15 12:43:23.846873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.846885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.950904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.950985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.072 [2024-05-15 12:43:23.951004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.951017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.991104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.991201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.072 [2024-05-15 12:43:23.991220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.991234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.991342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.991359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.072 [2024-05-15 12:43:23.991372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.991384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.991439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.991455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.072 [2024-05-15 12:43:23.991467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.991479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.991625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.991652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.072 [2024-05-15 12:43:23.991665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.991677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.991727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.991745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:15.072 [2024-05-15 12:43:23.991759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 
12:43:23.991770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.991815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.991837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.072 [2024-05-15 12:43:23.991849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.072 [2024-05-15 12:43:23.991861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-05-15 12:43:23.991913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.072 [2024-05-15 12:43:23.991928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.073 [2024-05-15 12:43:23.991940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.073 [2024-05-15 12:43:23.991953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.073 [2024-05-15 12:43:23.992097] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.293 ms, result 0 00:22:16.446 00:22:16.446 00:22:16.446 12:43:25 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:18.974 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:18.974 12:43:27 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:18.974 [2024-05-15 12:43:27.514176] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:18.974 [2024-05-15 12:43:27.514352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75827 ] 00:22:18.974 [2024-05-15 12:43:27.685438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.974 [2024-05-15 12:43:27.947170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.541 [2024-05-15 12:43:28.291643] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:19.541 [2024-05-15 12:43:28.291727] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:19.541 [2024-05-15 12:43:28.447489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.447578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:19.541 [2024-05-15 12:43:28.447599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:19.541 [2024-05-15 12:43:28.447613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.447681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.447700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:19.541 [2024-05-15 12:43:28.447713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:19.541 [2024-05-15 12:43:28.447724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.447753] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:19.541 [2024-05-15 12:43:28.448665] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:19.541 [2024-05-15 12:43:28.448704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.448719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:19.541 [2024-05-15 12:43:28.448731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:22:19.541 [2024-05-15 12:43:28.448743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.450667] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:19.541 [2024-05-15 12:43:28.467131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.467182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:19.541 [2024-05-15 12:43:28.467220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.466 ms 00:22:19.541 [2024-05-15 12:43:28.467237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.467329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.467354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:19.541 [2024-05-15 12:43:28.467367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:19.541 [2024-05-15 12:43:28.467379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.475991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.476034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:19.541 [2024-05-15 12:43:28.476051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.522 ms 00:22:19.541 [2024-05-15 12:43:28.476062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.476187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.476207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:19.541 [2024-05-15 12:43:28.476220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:19.541 [2024-05-15 12:43:28.476231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.476286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.476308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:19.541 [2024-05-15 12:43:28.476320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:19.541 [2024-05-15 12:43:28.476332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.476371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:19.541 [2024-05-15 12:43:28.481341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.481375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:19.541 [2024-05-15 12:43:28.481390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.983 ms 00:22:19.541 [2024-05-15 12:43:28.481402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 
[2024-05-15 12:43:28.481442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.541 [2024-05-15 12:43:28.481457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:19.541 [2024-05-15 12:43:28.481469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:19.541 [2024-05-15 12:43:28.481480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.541 [2024-05-15 12:43:28.481587] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:19.541 [2024-05-15 12:43:28.481622] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:19.541 [2024-05-15 12:43:28.481667] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:19.541 [2024-05-15 12:43:28.481688] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:19.541 [2024-05-15 12:43:28.481770] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:19.541 [2024-05-15 12:43:28.481795] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:19.541 [2024-05-15 12:43:28.481811] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:19.541 [2024-05-15 12:43:28.481826] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:19.541 [2024-05-15 12:43:28.481840] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:19.541 [2024-05-15 12:43:28.481857] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:19.541 [2024-05-15 12:43:28.481868] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:19.542 [2024-05-15 12:43:28.481878] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:19.542 [2024-05-15 12:43:28.481888] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:19.542 [2024-05-15 12:43:28.481900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.542 [2024-05-15 12:43:28.481911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:19.542 [2024-05-15 12:43:28.481922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:22:19.542 [2024-05-15 12:43:28.481933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.542 [2024-05-15 12:43:28.482010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.542 [2024-05-15 12:43:28.482032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:19.542 [2024-05-15 12:43:28.482049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:19.542 [2024-05-15 12:43:28.482059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.542 [2024-05-15 12:43:28.482146] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:19.542 [2024-05-15 12:43:28.482161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:19.542 [2024-05-15 12:43:28.482174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482185] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:19.542 [2024-05-15 12:43:28.482205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:19.542 [2024-05-15 12:43:28.482236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.542 [2024-05-15 12:43:28.482256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:19.542 [2024-05-15 12:43:28.482266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:19.542 [2024-05-15 12:43:28.482276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.542 [2024-05-15 12:43:28.482285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:19.542 [2024-05-15 12:43:28.482295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:19.542 [2024-05-15 12:43:28.482305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:19.542 [2024-05-15 12:43:28.482327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:19.542 [2024-05-15 12:43:28.482338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:19.542 [2024-05-15 12:43:28.482359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:19.542 [2024-05-15 12:43:28.482382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482393] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:19.542 [2024-05-15 12:43:28.482403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:19.542 [2024-05-15 12:43:28.482433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:19.542 [2024-05-15 12:43:28.482463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:19.542 [2024-05-15 12:43:28.482506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:19.542 [2024-05-15 12:43:28.482539] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.542 [2024-05-15 12:43:28.482559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:19.542 [2024-05-15 12:43:28.482569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:19.542 [2024-05-15 12:43:28.482579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.542 [2024-05-15 12:43:28.482589] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:19.542 [2024-05-15 12:43:28.482600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:19.542 [2024-05-15 12:43:28.482611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.542 [2024-05-15 12:43:28.482638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:19.542 [2024-05-15 12:43:28.482648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:19.542 [2024-05-15 12:43:28.482658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:19.542 [2024-05-15 12:43:28.482668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:19.542 [2024-05-15 12:43:28.482678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:19.542 [2024-05-15 12:43:28.482689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:19.542 [2024-05-15 12:43:28.482700] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:19.542 [2024-05-15 12:43:28.482714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.542 [2024-05-15 12:43:28.482727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:19.542 [2024-05-15 12:43:28.482738] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:19.542 [2024-05-15 12:43:28.482749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:19.542 [2024-05-15 12:43:28.482760] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:19.542 [2024-05-15 12:43:28.482770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:19.542 [2024-05-15 12:43:28.482781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:19.542 [2024-05-15 12:43:28.482792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:19.542 [2024-05-15 12:43:28.482803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:19.542 [2024-05-15 12:43:28.482814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:19.542 [2024-05-15 
12:43:28.482825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:19.542 [2024-05-15 12:43:28.482836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:19.542 [2024-05-15 12:43:28.482847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:19.542 [2024-05-15 12:43:28.482858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:19.542 [2024-05-15 12:43:28.482869] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:19.542 [2024-05-15 12:43:28.482880] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.542 [2024-05-15 12:43:28.482892] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:19.542 [2024-05-15 12:43:28.482903] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:19.542 [2024-05-15 12:43:28.482914] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:19.542 [2024-05-15 12:43:28.482925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:19.542 [2024-05-15 12:43:28.482937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.542 [2024-05-15 12:43:28.482948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:19.542 [2024-05-15 12:43:28.482959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:22:19.542 [2024-05-15 12:43:28.482970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.542 [2024-05-15 12:43:28.504791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.542 [2024-05-15 12:43:28.504840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:19.542 [2024-05-15 12:43:28.504858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.766 ms 00:22:19.542 [2024-05-15 12:43:28.504870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.542 [2024-05-15 12:43:28.504977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.542 [2024-05-15 12:43:28.504992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:19.542 [2024-05-15 12:43:28.505010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:19.542 [2024-05-15 12:43:28.505021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.801 [2024-05-15 12:43:28.555337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.801 [2024-05-15 12:43:28.555414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:19.801 [2024-05-15 12:43:28.555434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.244 ms 00:22:19.801 [2024-05-15 12:43:28.555446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.801 [2024-05-15 
12:43:28.555540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.801 [2024-05-15 12:43:28.555559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:19.801 [2024-05-15 12:43:28.555573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:19.801 [2024-05-15 12:43:28.555584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.801 [2024-05-15 12:43:28.556203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.801 [2024-05-15 12:43:28.556230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:19.801 [2024-05-15 12:43:28.556261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:22:19.801 [2024-05-15 12:43:28.556272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.801 [2024-05-15 12:43:28.556428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.801 [2024-05-15 12:43:28.556447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:19.801 [2024-05-15 12:43:28.556460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:22:19.801 [2024-05-15 12:43:28.556470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.801 [2024-05-15 12:43:28.576089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.801 [2024-05-15 12:43:28.576153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:19.801 [2024-05-15 12:43:28.576171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.591 ms 00:22:19.801 [2024-05-15 12:43:28.576183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.592679] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:19.802 [2024-05-15 12:43:28.592726] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:19.802 [2024-05-15 12:43:28.592743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.592756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:19.802 [2024-05-15 12:43:28.592769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.421 ms 00:22:19.802 [2024-05-15 12:43:28.592781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.621210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.621308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:19.802 [2024-05-15 12:43:28.621328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.379 ms 00:22:19.802 [2024-05-15 12:43:28.621340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.638406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.638526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:19.802 [2024-05-15 12:43:28.638547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.991 ms 00:22:19.802 [2024-05-15 12:43:28.638559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.654650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.654697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:19.802 [2024-05-15 12:43:28.654715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.011 ms 00:22:19.802 [2024-05-15 12:43:28.654726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.655255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.655292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:19.802 [2024-05-15 12:43:28.655309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:22:19.802 [2024-05-15 12:43:28.655320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.733576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.733650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:19.802 [2024-05-15 12:43:28.733671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.230 ms 00:22:19.802 [2024-05-15 12:43:28.733683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.745897] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:19.802 [2024-05-15 12:43:28.749470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.749520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:19.802 [2024-05-15 12:43:28.749541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.708 ms 00:22:19.802 [2024-05-15 12:43:28.749553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.749667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.749689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:19.802 [2024-05-15 12:43:28.749703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:19.802 [2024-05-15 12:43:28.749714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.749808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.749835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:19.802 [2024-05-15 12:43:28.749849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:19.802 [2024-05-15 12:43:28.749861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.751915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.751951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:19.802 [2024-05-15 12:43:28.751970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:22:19.802 [2024-05-15 12:43:28.751980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.752016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.752031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:19.802 [2024-05-15 12:43:28.752043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 
ms 00:22:19.802 [2024-05-15 12:43:28.752054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.752104] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:19.802 [2024-05-15 12:43:28.752121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.752132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:19.802 [2024-05-15 12:43:28.752143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:19.802 [2024-05-15 12:43:28.752158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.783059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.783126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:19.802 [2024-05-15 12:43:28.783145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.862 ms 00:22:19.802 [2024-05-15 12:43:28.783157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.783241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.802 [2024-05-15 12:43:28.783266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:19.802 [2024-05-15 12:43:28.783279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:19.802 [2024-05-15 12:43:28.783290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.802 [2024-05-15 12:43:28.784728] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.604 ms, result 0 00:22:58.856  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (26 MBps) Copying: 80/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (24 MBps) Copying: 131/1024 [MB] (26 MBps) Copying: 156/1024 [MB] (25 MBps) Copying: 183/1024 [MB] (26 MBps) Copying: 209/1024 [MB] (26 MBps) Copying: 236/1024 [MB] (26 MBps) Copying: 263/1024 [MB] (27 MBps) Copying: 290/1024 [MB] (27 MBps) Copying: 319/1024 [MB] (28 MBps) Copying: 347/1024 [MB] (27 MBps) Copying: 375/1024 [MB] (28 MBps) Copying: 402/1024 [MB] (27 MBps) Copying: 430/1024 [MB] (27 MBps) Copying: 457/1024 [MB] (27 MBps) Copying: 485/1024 [MB] (27 MBps) Copying: 512/1024 [MB] (27 MBps) Copying: 540/1024 [MB] (27 MBps) Copying: 567/1024 [MB] (26 MBps) Copying: 593/1024 [MB] (26 MBps) Copying: 621/1024 [MB] (27 MBps) Copying: 648/1024 [MB] (26 MBps) Copying: 675/1024 [MB] (27 MBps) Copying: 703/1024 [MB] (27 MBps) Copying: 730/1024 [MB] (27 MBps) Copying: 757/1024 [MB] (27 MBps) Copying: 785/1024 [MB] (27 MBps) Copying: 812/1024 [MB] (27 MBps) Copying: 841/1024 [MB] (28 MBps) Copying: 869/1024 [MB] (28 MBps) Copying: 896/1024 [MB] (27 MBps) Copying: 924/1024 [MB] (27 MBps) Copying: 951/1024 [MB] (27 MBps) Copying: 978/1024 [MB] (26 MBps) Copying: 1005/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (18 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-05-15 12:44:07.561482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.856 [2024-05-15 12:44:07.561578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:58.856 [2024-05-15 12:44:07.561601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:58.856 [2024-05-15 12:44:07.561626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.856 [2024-05-15 
12:44:07.565103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:58.856 [2024-05-15 12:44:07.569665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.856 [2024-05-15 12:44:07.569708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:58.856 [2024-05-15 12:44:07.569726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:22:58.856 [2024-05-15 12:44:07.569737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.856 [2024-05-15 12:44:07.581260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.856 [2024-05-15 12:44:07.581310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:58.857 [2024-05-15 12:44:07.581329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.564 ms 00:22:58.857 [2024-05-15 12:44:07.581341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.603013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.603060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:58.857 [2024-05-15 12:44:07.603079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.639 ms 00:22:58.857 [2024-05-15 12:44:07.603090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.609572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.609609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:58.857 [2024-05-15 12:44:07.609624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.444 ms 00:22:58.857 [2024-05-15 12:44:07.609635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.640671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.640719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:58.857 [2024-05-15 12:44:07.640737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.955 ms 00:22:58.857 [2024-05-15 12:44:07.640748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.658602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.658672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:58.857 [2024-05-15 12:44:07.658690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.799 ms 00:22:58.857 [2024-05-15 12:44:07.658702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.752659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.752731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:58.857 [2024-05-15 12:44:07.752754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.901 ms 00:22:58.857 [2024-05-15 12:44:07.752767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.784383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.784434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:58.857 [2024-05-15 
12:44:07.784454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.593 ms 00:22:58.857 [2024-05-15 12:44:07.784465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.814758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.814806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:58.857 [2024-05-15 12:44:07.814833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.237 ms 00:22:58.857 [2024-05-15 12:44:07.814844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.857 [2024-05-15 12:44:07.844785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.857 [2024-05-15 12:44:07.844841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:58.857 [2024-05-15 12:44:07.844860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.894 ms 00:22:58.857 [2024-05-15 12:44:07.844871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.115 [2024-05-15 12:44:07.874802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.115 [2024-05-15 12:44:07.874871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:59.115 [2024-05-15 12:44:07.874890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.819 ms 00:22:59.115 [2024-05-15 12:44:07.874902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.115 [2024-05-15 12:44:07.874945] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:59.115 [2024-05-15 12:44:07.874969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116224 / 261120 wr_cnt: 1 state: open 00:22:59.115 [2024-05-15 12:44:07.874985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.874997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 
12:44:07.875134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:22:59.115 [2024-05-15 12:44:07.875438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:59.115 [2024-05-15 12:44:07.875676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.875993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:59.116 [2024-05-15 12:44:07.876201] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:59.116 [2024-05-15 12:44:07.876212] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04fa0f24-bd68-4724-a523-fdd66996a60c 00:22:59.116 [2024-05-15 12:44:07.876223] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116224 00:22:59.116 [2024-05-15 12:44:07.876235] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117184 00:22:59.116 [2024-05-15 12:44:07.876245] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116224 00:22:59.116 [2024-05-15 12:44:07.876257] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:22:59.116 [2024-05-15 12:44:07.876267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:59.116 [2024-05-15 12:44:07.876279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:59.116 [2024-05-15 12:44:07.876295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:59.116 [2024-05-15 12:44:07.876306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:59.116 [2024-05-15 12:44:07.876316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:59.116 [2024-05-15 12:44:07.876326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.116 [2024-05-15 12:44:07.876338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:59.116 [2024-05-15 12:44:07.876349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:22:59.116 [2024-05-15 12:44:07.876372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:07.893121] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.116 [2024-05-15 12:44:07.893170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:59.116 [2024-05-15 12:44:07.893188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.695 ms 00:22:59.116 [2024-05-15 12:44:07.893208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:07.893467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.116 [2024-05-15 12:44:07.893490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:59.116 [2024-05-15 12:44:07.893532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:22:59.116 [2024-05-15 12:44:07.893544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:07.940826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:07.940890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.116 [2024-05-15 12:44:07.940914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:07.940933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:07.941014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:07.941028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.116 [2024-05-15 12:44:07.941041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:07.941051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:07.941156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:07.941175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.116 [2024-05-15 12:44:07.941187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:07.941205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:07.941228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:07.941241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.116 [2024-05-15 12:44:07.941253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:07.941264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:08.041341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:08.041418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.116 [2024-05-15 12:44:08.041460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:08.041472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:08.081439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:08.081515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.116 [2024-05-15 12:44:08.081543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:08.081555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:08.081656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:08.081675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.116 [2024-05-15 12:44:08.081688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:08.081700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:08.081764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.116 [2024-05-15 12:44:08.081780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.116 [2024-05-15 12:44:08.081792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.116 [2024-05-15 12:44:08.081803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.116 [2024-05-15 12:44:08.081938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.117 [2024-05-15 12:44:08.081966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.117 [2024-05-15 12:44:08.081980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.117 [2024-05-15 12:44:08.081991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.117 [2024-05-15 12:44:08.082045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.117 [2024-05-15 12:44:08.082071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:59.117 [2024-05-15 12:44:08.082083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.117 [2024-05-15 12:44:08.082094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.117 [2024-05-15 12:44:08.082138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.117 [2024-05-15 12:44:08.082153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.117 [2024-05-15 12:44:08.082165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.117 [2024-05-15 12:44:08.082176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.117 [2024-05-15 12:44:08.082232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.117 [2024-05-15 12:44:08.082257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.117 [2024-05-15 12:44:08.082271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.117 [2024-05-15 12:44:08.082282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.117 [2024-05-15 12:44:08.082425] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.101 ms, result 0 00:23:01.021 00:23:01.021 00:23:01.021 12:44:09 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:01.021 [2024-05-15 12:44:09.853133] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
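
The two management sequences above follow a fixed trace shape from mngt/ftl_mngt.c: each step logs an Action/Rollback record (406), a name (407), a duration in ms (409) and a status (410), and each sequence closes with a finish_msg total ('FTL startup' = 336.604 ms, 'FTL shutdown' = 521.101 ms). The totals also cover time spent between trace points, so they land slightly above the per-step sums (the startup steps above add up to roughly 329.8 ms). The same kind of spot-check works on the ftl_debug stats dump: WAF is total writes over user writes, 117184 / 116224 ≈ 1.0083, and the 80.00 MiB l2p region matches the reported 20971520 L2P entries × 4-byte address size = 80 MiB. Below is a minimal sketch for pulling the step timings out of a console dump in exactly this format; the script and its regexes are illustrative, not part of SPDK, and they assume the [FTL][ftl0] *NOTICE* layout printed above (e.g. python3 ftl_trace_summary.py < console.log, both names made up here).

import re
import sys

# Step names and durations as traced by mngt/ftl_mngt.c in the output above:
#   407:trace_step: *NOTICE*: [FTL][ftl0] name: <step name>
#   409:trace_step: *NOTICE*: [FTL][ftl0] duration: <float> ms
NAME_RE = re.compile(
    r"407:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: "
    r"(.+?)(?=\s\d{2}:\d{2}:\d{2}\.\d{3}\s|\n|$)")
DUR_RE = re.compile(
    r"409:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+)\s+ms")
# Per-process totals from finish_msg ("Management process finished, ...").
FINISH_RE = re.compile(
    r"Management process finished, name '([^']+)', duration = ([0-9.]+) ms")

def summarize(console_text: str) -> None:
    steps = list(zip(NAME_RE.findall(console_text),
                     (float(d) for d in DUR_RE.findall(console_text))))
    for name, dur in steps:
        print(f"{dur:10.3f} ms  {name}")
    # Expect this sum to sit a little below the finish_msg totals, since the
    # totals also include time between trace points.
    print(f"{sum(d for _, d in steps):10.3f} ms  (sum of traced steps)")
    for name, reported in FINISH_RE.findall(console_text):
        print(f"process {name!r} reported {reported} ms")

if __name__ == "__main__":
    summarize(sys.stdin.read())

The duration regex tolerates the wrapped "duration: 0.007 / ms" splits seen in this dump, and the name lookahead stops at the next HH:MM:SS.mmm console timestamp, so the sketch works on the flattened form above as well as on one-record-per-line console output.
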
00:23:01.021 [2024-05-15 12:44:09.853317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76255 ] 00:23:01.021 [2024-05-15 12:44:10.016472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.324 [2024-05-15 12:44:10.252854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.893 [2024-05-15 12:44:10.596191] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:01.893 [2024-05-15 12:44:10.596282] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:01.893 [2024-05-15 12:44:10.751848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.751912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:01.893 [2024-05-15 12:44:10.751945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:01.893 [2024-05-15 12:44:10.751957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.752031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.752050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:01.893 [2024-05-15 12:44:10.752063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:01.893 [2024-05-15 12:44:10.752074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.752104] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:01.893 [2024-05-15 12:44:10.753006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:01.893 [2024-05-15 12:44:10.753045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.753059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:01.893 [2024-05-15 12:44:10.753072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:23:01.893 [2024-05-15 12:44:10.753083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.755002] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:01.893 [2024-05-15 12:44:10.771480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.771554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:01.893 [2024-05-15 12:44:10.771578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.479 ms 00:23:01.893 [2024-05-15 12:44:10.771590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.771661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.771680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:01.893 [2024-05-15 12:44:10.771693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:01.893 [2024-05-15 12:44:10.771704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.780435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 
12:44:10.780486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:01.893 [2024-05-15 12:44:10.780520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.637 ms 00:23:01.893 [2024-05-15 12:44:10.780533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.780669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.780690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:01.893 [2024-05-15 12:44:10.780702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:01.893 [2024-05-15 12:44:10.780714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.780774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.780796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:01.893 [2024-05-15 12:44:10.780809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:01.893 [2024-05-15 12:44:10.780820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.780867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:01.893 [2024-05-15 12:44:10.785851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.785904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:01.893 [2024-05-15 12:44:10.785934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.003 ms 00:23:01.893 [2024-05-15 12:44:10.785945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.785987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.786003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:01.893 [2024-05-15 12:44:10.786015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:01.893 [2024-05-15 12:44:10.786026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.786093] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:01.893 [2024-05-15 12:44:10.786125] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:01.893 [2024-05-15 12:44:10.786167] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:01.893 [2024-05-15 12:44:10.786187] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:01.893 [2024-05-15 12:44:10.786267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:01.893 [2024-05-15 12:44:10.786282] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:01.893 [2024-05-15 12:44:10.786297] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:01.893 [2024-05-15 12:44:10.786311] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:01.893 [2024-05-15 12:44:10.786325] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:01.893 [2024-05-15 12:44:10.786341] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:01.893 [2024-05-15 12:44:10.786353] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:01.893 [2024-05-15 12:44:10.786364] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:01.893 [2024-05-15 12:44:10.786375] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:01.893 [2024-05-15 12:44:10.786387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.786398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:01.893 [2024-05-15 12:44:10.786410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:23:01.893 [2024-05-15 12:44:10.786421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.786497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.893 [2024-05-15 12:44:10.786532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:01.893 [2024-05-15 12:44:10.786551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:01.893 [2024-05-15 12:44:10.786562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.893 [2024-05-15 12:44:10.786662] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:01.894 [2024-05-15 12:44:10.786678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:01.894 [2024-05-15 12:44:10.786691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.894 [2024-05-15 12:44:10.786703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:01.894 [2024-05-15 12:44:10.786724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:01.894 [2024-05-15 12:44:10.786745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:01.894 [2024-05-15 12:44:10.786756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.894 [2024-05-15 12:44:10.786777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:01.894 [2024-05-15 12:44:10.786787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:01.894 [2024-05-15 12:44:10.786797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.894 [2024-05-15 12:44:10.786807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:01.894 [2024-05-15 12:44:10.786820] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:01.894 [2024-05-15 12:44:10.786831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:01.894 [2024-05-15 12:44:10.786851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:01.894 [2024-05-15 12:44:10.786861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:01.894 [2024-05-15 12:44:10.786881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:01.894 [2024-05-15 12:44:10.786904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:01.894 [2024-05-15 12:44:10.786915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:01.894 [2024-05-15 12:44:10.786925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.894 [2024-05-15 12:44:10.786946] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:01.894 [2024-05-15 12:44:10.786957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.894 [2024-05-15 12:44:10.786977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:01.894 [2024-05-15 12:44:10.786987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:01.894 [2024-05-15 12:44:10.786997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.894 [2024-05-15 12:44:10.787006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:01.894 [2024-05-15 12:44:10.787016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:01.894 [2024-05-15 12:44:10.787026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.894 [2024-05-15 12:44:10.787036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:01.894 [2024-05-15 12:44:10.787046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:01.894 [2024-05-15 12:44:10.787056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.894 [2024-05-15 12:44:10.787066] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:01.894 [2024-05-15 12:44:10.787075] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:01.894 [2024-05-15 12:44:10.787085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.894 [2024-05-15 12:44:10.787095] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:01.894 [2024-05-15 12:44:10.787106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:01.894 [2024-05-15 12:44:10.787117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.894 [2024-05-15 12:44:10.787127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.894 [2024-05-15 12:44:10.787143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:01.894 [2024-05-15 12:44:10.787154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:01.894 [2024-05-15 12:44:10.787165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:01.894 [2024-05-15 12:44:10.787176] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:01.894 [2024-05-15 12:44:10.787186] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:01.894 [2024-05-15 12:44:10.787196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:01.894 [2024-05-15 12:44:10.787207] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:01.894 [2024-05-15 12:44:10.787222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.894 [2024-05-15 12:44:10.787234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:01.894 [2024-05-15 12:44:10.787246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:01.894 [2024-05-15 12:44:10.787257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:01.894 [2024-05-15 12:44:10.787268] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:01.894 [2024-05-15 12:44:10.787279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:01.894 [2024-05-15 12:44:10.787290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:01.894 [2024-05-15 12:44:10.787301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:01.894 [2024-05-15 12:44:10.787312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:01.894 [2024-05-15 12:44:10.787323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:01.894 [2024-05-15 12:44:10.787335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:01.894 [2024-05-15 12:44:10.787346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:01.894 [2024-05-15 12:44:10.787358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:01.894 [2024-05-15 12:44:10.787369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:01.894 [2024-05-15 12:44:10.787380] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:01.894 [2024-05-15 12:44:10.787393] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.894 [2024-05-15 12:44:10.787405] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.894 [2024-05-15 12:44:10.787416] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:01.894 [2024-05-15 12:44:10.787428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:01.894 [2024-05-15 12:44:10.787439] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
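
The superblock metadata dump above records each region as hex block offsets and sizes; assuming the 4 KiB FTL block size this device reports elsewhere in the run, those hex sizes convert directly to the MiB figures from the layout dump. For instance (the region-type-to-name matching here is inferred from the sizes, not stated by the log):

  printf '%d MiB\n' $(( 0x5000 * 4096 / 1024 / 1024 ))     # type 0x2: 0x5000 blocks -> 80 MiB, the l2p region
  printf '%d MiB\n' $(( 0x100000 * 4096 / 1024 / 1024 ))   # type 0x8: 0x100000 blocks -> 4096 MiB, data_nvc
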
blk_sz:0x3fc60 00:23:01.894 [2024-05-15 12:44:10.787451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.787463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:01.894 [2024-05-15 12:44:10.787474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:23:01.894 [2024-05-15 12:44:10.787486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.809652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.809714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:01.894 [2024-05-15 12:44:10.809734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.090 ms 00:23:01.894 [2024-05-15 12:44:10.809747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.809879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.809895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:01.894 [2024-05-15 12:44:10.809914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:01.894 [2024-05-15 12:44:10.809925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.862816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.862896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:01.894 [2024-05-15 12:44:10.862916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.788 ms 00:23:01.894 [2024-05-15 12:44:10.862928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.863012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.863030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:01.894 [2024-05-15 12:44:10.863043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:01.894 [2024-05-15 12:44:10.863054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.863671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.863703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:01.894 [2024-05-15 12:44:10.863718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:23:01.894 [2024-05-15 12:44:10.863729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.863887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.863905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:01.894 [2024-05-15 12:44:10.863918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:23:01.894 [2024-05-15 12:44:10.863929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.883905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.894 [2024-05-15 12:44:10.883948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:01.894 [2024-05-15 12:44:10.883965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.948 ms 00:23:01.894 [2024-05-15 
12:44:10.883977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.894 [2024-05-15 12:44:10.900568] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:01.894 [2024-05-15 12:44:10.900627] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:01.894 [2024-05-15 12:44:10.900644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.895 [2024-05-15 12:44:10.900657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:01.895 [2024-05-15 12:44:10.900671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.525 ms 00:23:01.895 [2024-05-15 12:44:10.900681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:10.929585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:10.929630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.154 [2024-05-15 12:44:10.929646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.858 ms 00:23:02.154 [2024-05-15 12:44:10.929658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:10.944931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:10.944972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.154 [2024-05-15 12:44:10.944989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.224 ms 00:23:02.154 [2024-05-15 12:44:10.945000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:10.960000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:10.960052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.154 [2024-05-15 12:44:10.960068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.955 ms 00:23:02.154 [2024-05-15 12:44:10.960080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:10.960642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:10.960678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.154 [2024-05-15 12:44:10.960694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:23:02.154 [2024-05-15 12:44:10.960706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.039514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.039603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:02.154 [2024-05-15 12:44:11.039623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.782 ms 00:23:02.154 [2024-05-15 12:44:11.039635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.051984] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:02.154 [2024-05-15 12:44:11.055812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.055866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.154 [2024-05-15 12:44:11.055884] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.098 ms 00:23:02.154 [2024-05-15 12:44:11.055896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.056006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.056028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.154 [2024-05-15 12:44:11.056041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:02.154 [2024-05-15 12:44:11.056052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.057675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.057716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.154 [2024-05-15 12:44:11.057730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:23:02.154 [2024-05-15 12:44:11.057741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.059847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.059883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:02.154 [2024-05-15 12:44:11.059901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:23:02.154 [2024-05-15 12:44:11.059913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.059954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.059969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.154 [2024-05-15 12:44:11.059981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:02.154 [2024-05-15 12:44:11.059992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.060052] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.154 [2024-05-15 12:44:11.060069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.060080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.154 [2024-05-15 12:44:11.060091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:02.154 [2024-05-15 12:44:11.060107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.091463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.091538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.154 [2024-05-15 12:44:11.091556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.311 ms 00:23:02.154 [2024-05-15 12:44:11.091568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.091652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.154 [2024-05-15 12:44:11.091683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.154 [2024-05-15 12:44:11.091695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:02.154 [2024-05-15 12:44:11.091712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.154 [2024-05-15 12:44:11.099791] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 345.127 ms, result 0 00:23:41.693  Copying: 1024/1024 [MB] (average 26 MBps)[2024-05-15 12:44:50.652227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.693 [2024-05-15 12:44:50.652333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:41.693 [2024-05-15 12:44:50.652358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:41.693 [2024-05-15 12:44:50.652371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.693 [2024-05-15 12:44:50.652412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:41.693 [2024-05-15 12:44:50.656951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.693 [2024-05-15 12:44:50.657009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:41.693 [2024-05-15 12:44:50.657043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.513 ms 00:23:41.693 [2024-05-15 12:44:50.657054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.693 [2024-05-15 12:44:50.657348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.693 [2024-05-15 12:44:50.657370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:41.693 [2024-05-15 12:44:50.657383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:23:41.693 [2024-05-15 12:44:50.657394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.693 [2024-05-15 12:44:50.662152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.693 [2024-05-15 12:44:50.662199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:41.693 [2024-05-15 12:44:50.662217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.729 ms 00:23:41.693 [2024-05-15 12:44:50.662230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.693 [2024-05-15 12:44:50.668816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.693 [2024-05-15 12:44:50.668854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Finish L2P unmaps 00:23:41.693 [2024-05-15 12:44:50.668870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.542 ms 00:23:41.693 [2024-05-15 12:44:50.668881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.693 [2024-05-15 12:44:50.700602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.693 [2024-05-15 12:44:50.700653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:41.693 [2024-05-15 12:44:50.700673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.651 ms 00:23:41.693 [2024-05-15 12:44:50.700685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.953 [2024-05-15 12:44:50.718981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.953 [2024-05-15 12:44:50.719056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:41.953 [2024-05-15 12:44:50.719092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.239 ms 00:23:41.953 [2024-05-15 12:44:50.719104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.953 [2024-05-15 12:44:50.825487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.953 [2024-05-15 12:44:50.825580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:41.953 [2024-05-15 12:44:50.825602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.325 ms 00:23:41.953 [2024-05-15 12:44:50.825614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.953 [2024-05-15 12:44:50.857599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.953 [2024-05-15 12:44:50.857660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:41.953 [2024-05-15 12:44:50.857680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.958 ms 00:23:41.953 [2024-05-15 12:44:50.857693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.953 [2024-05-15 12:44:50.888336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.953 [2024-05-15 12:44:50.888397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:41.953 [2024-05-15 12:44:50.888433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.585 ms 00:23:41.953 [2024-05-15 12:44:50.888445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.953 [2024-05-15 12:44:50.918658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.953 [2024-05-15 12:44:50.918727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:41.953 [2024-05-15 12:44:50.918747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.138 ms 00:23:41.953 [2024-05-15 12:44:50.918759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.953 [2024-05-15 12:44:50.948721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.953 [2024-05-15 12:44:50.948780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:41.953 [2024-05-15 12:44:50.948815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.836 ms 00:23:41.953 [2024-05-15 12:44:50.948827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.953 [2024-05-15 12:44:50.948874] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:23:41.954 [2024-05-15 12:44:50.948898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:23:41.954 [2024-05-15 12:44:50.948914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.948927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.948939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.948951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.948963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.948975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.948987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.948999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949870] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.949989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.950001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.950013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.950025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.950037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.950049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:41.954 [2024-05-15 12:44:50.950060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950169] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:41.955 [2024-05-15 12:44:50.950190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:41.955 [2024-05-15 12:44:50.950203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04fa0f24-bd68-4724-a523-fdd66996a60c 00:23:41.955 [2024-05-15 12:44:50.950215] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:23:41.955 [2024-05-15 12:44:50.950227] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18368 00:23:41.955 [2024-05-15 12:44:50.950238] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17408 00:23:41.955 [2024-05-15 12:44:50.950251] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0551 00:23:41.955 [2024-05-15 12:44:50.950262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:41.955 [2024-05-15 12:44:50.950274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:41.955 [2024-05-15 12:44:50.950292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:41.955 [2024-05-15 12:44:50.950303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:41.955 [2024-05-15 12:44:50.950313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:41.955 [2024-05-15 12:44:50.950325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.955 [2024-05-15 12:44:50.950338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:41.955 [2024-05-15 12:44:50.950349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:23:41.955 [2024-05-15 12:44:50.950361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.213 [2024-05-15 12:44:50.967289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.213 [2024-05-15 12:44:50.967330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:42.213 [2024-05-15 12:44:50.967362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.856 ms 00:23:42.213 [2024-05-15 12:44:50.967374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.213 [2024-05-15 12:44:50.967680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.213 [2024-05-15 12:44:50.967699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:42.213 [2024-05-15 12:44:50.967712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:23:42.213 [2024-05-15 12:44:50.967723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.213 [2024-05-15 12:44:51.014755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.014829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.214 [2024-05-15 12:44:51.014856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.014868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.014956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.014972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.214 [2024-05-15 12:44:51.014985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 
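
The WAF figure in the stats block above is simply total writes over user writes: 18368 / 17408 ≈ 1.0551, i.e. the 960 extra block writes are ones the FTL issued on its own behalf (metadata persistence and relocation). Reproducing the rounding with an illustrative one-liner:

  awk 'BEGIN { printf "%.4f\n", 18368 / 17408 }'   # -> 1.0551
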
12:44:51.014996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.015111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.015131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.214 [2024-05-15 12:44:51.015144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.015163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.015187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.015201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.214 [2024-05-15 12:44:51.015213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.015224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.121487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.121574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:42.214 [2024-05-15 12:44:51.121602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.121615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.162798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.162873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:42.214 [2024-05-15 12:44:51.162893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.162905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.163009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.163027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:42.214 [2024-05-15 12:44:51.163040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.163052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.163124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.163141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:42.214 [2024-05-15 12:44:51.163154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.163165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.163295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.163314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:42.214 [2024-05-15 12:44:51.163327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.163338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.163386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.163410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:42.214 [2024-05-15 12:44:51.163423] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.163434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.163480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.163529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:42.214 [2024-05-15 12:44:51.163545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.163557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.163618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.214 [2024-05-15 12:44:51.163636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:42.214 [2024-05-15 12:44:51.163648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.214 [2024-05-15 12:44:51.163660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.214 [2024-05-15 12:44:51.163811] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 511.553 ms, result 0 00:23:43.586 00:23:43.586 00:23:43.586 12:44:52 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:46.127 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:46.127 12:44:54 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:46.127 12:44:54 -- ftl/restore.sh@85 -- # restore_kill 00:23:46.127 12:44:54 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:46.127 12:44:54 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:46.127 12:44:54 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:46.127 12:44:54 -- ftl/restore.sh@32 -- # killprocess 74698 00:23:46.127 12:44:54 -- common/autotest_common.sh@926 -- # '[' -z 74698 ']' 00:23:46.127 12:44:54 -- common/autotest_common.sh@930 -- # kill -0 74698 00:23:46.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74698) - No such process 00:23:46.128 Process with pid 74698 is not found 00:23:46.128 12:44:54 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74698 is not found' 00:23:46.128 Remove shared memory files 00:23:46.128 12:44:54 -- ftl/restore.sh@33 -- # remove_shm 00:23:46.128 12:44:54 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:46.128 12:44:54 -- ftl/common.sh@205 -- # rm -f rm -f 00:23:46.128 12:44:54 -- ftl/common.sh@206 -- # rm -f rm -f 00:23:46.128 12:44:54 -- ftl/common.sh@207 -- # rm -f rm -f 00:23:46.128 12:44:54 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:46.128 12:44:54 -- ftl/common.sh@209 -- # rm -f rm -f 00:23:46.128 ************************************ 00:23:46.128 END TEST ftl_restore 00:23:46.128 ************************************ 00:23:46.128 00:23:46.128 real 3m13.957s 00:23:46.128 user 2m59.844s 00:23:46.128 sys 0m16.104s 00:23:46.128 12:44:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.128 12:44:54 -- common/autotest_common.sh@10 -- # set +x 00:23:46.128 12:44:54 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:23:46.128 12:44:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:46.128 12:44:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 
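
The "testfile: OK" line above is the whole point of the restore test: a checksum recorded while the FTL device was live must still verify after the device has been shut down and brought back. Stripped of the test harness, the pattern is just this (file names illustrative, not the actual restore.sh flow):

  md5sum testfile > testfile.md5    # record the checksum while the bdev holds the data
  # ... shut the FTL device down, then restore it ...
  md5sum -c testfile.md5            # prints "testfile: OK" if every byte came back
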
00:23:46.128 12:44:54 -- common/autotest_common.sh@10 -- # set +x 00:23:46.128 ************************************ 00:23:46.128 START TEST ftl_dirty_shutdown 00:23:46.128 ************************************ 00:23:46.128 12:44:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:23:46.128 * Looking for test storage... 00:23:46.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:46.128 12:44:54 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:46.128 12:44:54 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.128 12:44:54 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.128 12:44:54 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:46.128 12:44:54 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:46.128 12:44:54 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:46.128 12:44:54 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:46.128 12:44:54 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:46.128 12:44:54 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.128 12:44:54 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.128 12:44:54 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:46.128 12:44:54 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:46.128 12:44:54 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:46.128 12:44:54 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:46.128 12:44:54 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:46.128 12:44:54 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:46.128 12:44:54 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.128 12:44:54 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.128 12:44:54 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:46.128 12:44:54 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:46.128 12:44:54 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:46.128 12:44:54 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:46.128 12:44:54 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:46.128 12:44:54 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:46.128 12:44:54 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:46.128 12:44:54 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:46.128 12:44:54 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:46.128 12:44:54 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:46.128 12:44:54 
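
The trace above shows dirty_shutdown.sh parsing its arguments: it was invoked as "dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0", and the ":u:c:" optstring feeds a case statement that records -c as the NV cache address before the positional base-device address is consumed. A minimal sketch of the same pattern (the -u branch is a guess from the optstring; the real branch bodies follow in the trace):

  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # -c <bdf>: PCI address for the NV cache bdev
      u) uuid=$OPTARG ;;       # hypothetical: ':u:' says -u also takes a value
    esac
  done
  shift $((OPTIND - 1))        # the traced script uses an explicit "shift 2" here
  device=$1                    # here: 0000:00:07.0
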
-- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76762 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:46.128 12:44:54 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76762 00:23:46.128 12:44:54 -- common/autotest_common.sh@819 -- # '[' -z 76762 ']' 00:23:46.128 12:44:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.128 12:44:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:46.128 12:44:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.128 12:44:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:46.128 12:44:54 -- common/autotest_common.sh@10 -- # set +x 00:23:46.128 [2024-05-15 12:44:54.968620] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:46.128 [2024-05-15 12:44:54.969239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76762 ] 00:23:46.385 [2024-05-15 12:44:55.176813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.643 [2024-05-15 12:44:55.444073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:46.643 [2024-05-15 12:44:55.444341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.578 12:44:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:47.578 12:44:56 -- common/autotest_common.sh@852 -- # return 0 00:23:47.578 12:44:56 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:23:47.578 12:44:56 -- ftl/common.sh@54 -- # local name=nvme0 00:23:47.578 12:44:56 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:23:47.578 12:44:56 -- ftl/common.sh@56 -- # local size=103424 00:23:47.578 12:44:56 -- ftl/common.sh@59 -- # local base_bdev 00:23:47.578 12:44:56 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:23:48.145 12:44:56 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:48.145 12:44:56 -- ftl/common.sh@62 -- # local base_size 00:23:48.145 12:44:56 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:48.145 12:44:56 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:23:48.145 12:44:56 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:48.145 12:44:56 -- common/autotest_common.sh@1359 -- # local bs 00:23:48.145 12:44:56 -- common/autotest_common.sh@1360 -- # local nb 00:23:48.145 12:44:56 -- common/autotest_common.sh@1361 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:48.145 12:44:57 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:48.145 { 00:23:48.145 "name": "nvme0n1", 00:23:48.145 "aliases": [ 00:23:48.145 "22e6292a-bcab-4f17-a613-fddbe431ed53" 00:23:48.145 ], 00:23:48.145 "product_name": "NVMe disk", 00:23:48.145 "block_size": 4096, 00:23:48.145 "num_blocks": 1310720, 00:23:48.145 "uuid": "22e6292a-bcab-4f17-a613-fddbe431ed53", 00:23:48.145 "assigned_rate_limits": { 00:23:48.145 "rw_ios_per_sec": 0, 00:23:48.145 "rw_mbytes_per_sec": 0, 00:23:48.145 "r_mbytes_per_sec": 0, 00:23:48.145 "w_mbytes_per_sec": 0 00:23:48.145 }, 00:23:48.145 "claimed": true, 00:23:48.145 "claim_type": "read_many_write_one", 00:23:48.145 "zoned": false, 00:23:48.145 "supported_io_types": { 00:23:48.145 "read": true, 00:23:48.145 "write": true, 00:23:48.145 "unmap": true, 00:23:48.145 "write_zeroes": true, 00:23:48.145 "flush": true, 00:23:48.145 "reset": true, 00:23:48.145 "compare": true, 00:23:48.145 "compare_and_write": false, 00:23:48.145 "abort": true, 00:23:48.145 "nvme_admin": true, 00:23:48.145 "nvme_io": true 00:23:48.145 }, 00:23:48.145 "driver_specific": { 00:23:48.145 "nvme": [ 00:23:48.145 { 00:23:48.145 "pci_address": "0000:00:07.0", 00:23:48.145 "trid": { 00:23:48.145 "trtype": "PCIe", 00:23:48.145 "traddr": "0000:00:07.0" 00:23:48.145 }, 00:23:48.145 "ctrlr_data": { 00:23:48.145 "cntlid": 0, 00:23:48.145 "vendor_id": "0x1b36", 00:23:48.145 "model_number": "QEMU NVMe Ctrl", 00:23:48.145 "serial_number": "12341", 00:23:48.145 "firmware_revision": "8.0.0", 00:23:48.145 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:48.145 "oacs": { 00:23:48.145 "security": 0, 00:23:48.145 "format": 1, 00:23:48.145 "firmware": 0, 00:23:48.146 "ns_manage": 1 00:23:48.146 }, 00:23:48.146 "multi_ctrlr": false, 00:23:48.146 "ana_reporting": false 00:23:48.146 }, 00:23:48.146 "vs": { 00:23:48.146 "nvme_version": "1.4" 00:23:48.146 }, 00:23:48.146 "ns_data": { 00:23:48.146 "id": 1, 00:23:48.146 "can_share": false 00:23:48.146 } 00:23:48.146 } 00:23:48.146 ], 00:23:48.146 "mp_policy": "active_passive" 00:23:48.146 } 00:23:48.146 } 00:23:48.146 ]' 00:23:48.146 12:44:57 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:48.146 12:44:57 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:48.146 12:44:57 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:48.405 12:44:57 -- common/autotest_common.sh@1363 -- # nb=1310720 00:23:48.405 12:44:57 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:23:48.405 12:44:57 -- common/autotest_common.sh@1367 -- # echo 5120 00:23:48.405 12:44:57 -- ftl/common.sh@63 -- # base_size=5120 00:23:48.405 12:44:57 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:48.405 12:44:57 -- ftl/common.sh@67 -- # clear_lvols 00:23:48.405 12:44:57 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:48.405 12:44:57 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:48.405 12:44:57 -- ftl/common.sh@28 -- # stores=532cb311-8699-4381-ad64-f9dce0252a20 00:23:48.405 12:44:57 -- ftl/common.sh@29 -- # for lvs in $stores 00:23:48.405 12:44:57 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 532cb311-8699-4381-ad64-f9dce0252a20 00:23:48.971 12:44:57 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:48.971 12:44:57 -- ftl/common.sh@68 -- # lvs=1ea88be1-b9a7-4c6c-8698-2647820af5ce 00:23:48.971 12:44:57 -- 
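
get_bdev_size, traced above, turns the bdev_get_bdevs JSON into a size in MiB: block_size 4096 times num_blocks 1310720 is 5 GiB, hence bs=4096, nb=1310720 and bdev_size=5120. Condensed to its three essential calls (same rpc.py and jq filters as in the trace):

  bs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')
  nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')
  echo $(( bs * nb / 1024 / 1024 ))   # 4096 * 1310720 / 2^20 = 5120 (MiB)
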
ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1ea88be1-b9a7-4c6c-8698-2647820af5ce 00:23:49.229 12:44:58 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:49.229 12:44:58 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:23:49.229 12:44:58 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:49.229 12:44:58 -- ftl/common.sh@35 -- # local name=nvc0 00:23:49.229 12:44:58 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:23:49.229 12:44:58 -- ftl/common.sh@37 -- # local base_bdev=3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:49.229 12:44:58 -- ftl/common.sh@38 -- # local cache_size= 00:23:49.229 12:44:58 -- ftl/common.sh@41 -- # get_bdev_size 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:49.229 12:44:58 -- common/autotest_common.sh@1357 -- # local bdev_name=3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:49.229 12:44:58 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:49.229 12:44:58 -- common/autotest_common.sh@1359 -- # local bs 00:23:49.229 12:44:58 -- common/autotest_common.sh@1360 -- # local nb 00:23:49.229 12:44:58 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:49.487 12:44:58 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:49.487 { 00:23:49.487 "name": "3022c3e1-f70e-45df-88c1-b45fe0ad67f0", 00:23:49.487 "aliases": [ 00:23:49.487 "lvs/nvme0n1p0" 00:23:49.487 ], 00:23:49.487 "product_name": "Logical Volume", 00:23:49.487 "block_size": 4096, 00:23:49.487 "num_blocks": 26476544, 00:23:49.487 "uuid": "3022c3e1-f70e-45df-88c1-b45fe0ad67f0", 00:23:49.487 "assigned_rate_limits": { 00:23:49.487 "rw_ios_per_sec": 0, 00:23:49.487 "rw_mbytes_per_sec": 0, 00:23:49.487 "r_mbytes_per_sec": 0, 00:23:49.487 "w_mbytes_per_sec": 0 00:23:49.487 }, 00:23:49.487 "claimed": false, 00:23:49.487 "zoned": false, 00:23:49.487 "supported_io_types": { 00:23:49.487 "read": true, 00:23:49.487 "write": true, 00:23:49.487 "unmap": true, 00:23:49.487 "write_zeroes": true, 00:23:49.487 "flush": false, 00:23:49.487 "reset": true, 00:23:49.487 "compare": false, 00:23:49.487 "compare_and_write": false, 00:23:49.487 "abort": false, 00:23:49.487 "nvme_admin": false, 00:23:49.487 "nvme_io": false 00:23:49.487 }, 00:23:49.487 "driver_specific": { 00:23:49.487 "lvol": { 00:23:49.487 "lvol_store_uuid": "1ea88be1-b9a7-4c6c-8698-2647820af5ce", 00:23:49.487 "base_bdev": "nvme0n1", 00:23:49.487 "thin_provision": true, 00:23:49.487 "snapshot": false, 00:23:49.487 "clone": false, 00:23:49.487 "esnap_clone": false 00:23:49.487 } 00:23:49.487 } 00:23:49.487 } 00:23:49.487 ]' 00:23:49.487 12:44:58 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:49.746 12:44:58 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:49.746 12:44:58 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:49.746 12:44:58 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:49.746 12:44:58 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:49.746 12:44:58 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:49.746 12:44:58 -- ftl/common.sh@41 -- # local base_size=5171 00:23:49.746 12:44:58 -- ftl/common.sh@44 -- # local nvc_bdev 00:23:49.746 12:44:58 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:23:50.004 12:44:58 -- ftl/common.sh@45 -- # 
nvc_bdev=nvc0n1 00:23:50.004 12:44:58 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:50.004 12:44:58 -- ftl/common.sh@48 -- # get_bdev_size 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:50.004 12:44:58 -- common/autotest_common.sh@1357 -- # local bdev_name=3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:50.004 12:44:58 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:50.004 12:44:58 -- common/autotest_common.sh@1359 -- # local bs 00:23:50.004 12:44:58 -- common/autotest_common.sh@1360 -- # local nb 00:23:50.005 12:44:58 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:50.262 12:44:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:50.262 { 00:23:50.262 "name": "3022c3e1-f70e-45df-88c1-b45fe0ad67f0", 00:23:50.262 "aliases": [ 00:23:50.262 "lvs/nvme0n1p0" 00:23:50.262 ], 00:23:50.262 "product_name": "Logical Volume", 00:23:50.262 "block_size": 4096, 00:23:50.262 "num_blocks": 26476544, 00:23:50.262 "uuid": "3022c3e1-f70e-45df-88c1-b45fe0ad67f0", 00:23:50.262 "assigned_rate_limits": { 00:23:50.262 "rw_ios_per_sec": 0, 00:23:50.262 "rw_mbytes_per_sec": 0, 00:23:50.262 "r_mbytes_per_sec": 0, 00:23:50.262 "w_mbytes_per_sec": 0 00:23:50.262 }, 00:23:50.262 "claimed": false, 00:23:50.262 "zoned": false, 00:23:50.262 "supported_io_types": { 00:23:50.262 "read": true, 00:23:50.262 "write": true, 00:23:50.262 "unmap": true, 00:23:50.262 "write_zeroes": true, 00:23:50.262 "flush": false, 00:23:50.262 "reset": true, 00:23:50.262 "compare": false, 00:23:50.262 "compare_and_write": false, 00:23:50.262 "abort": false, 00:23:50.262 "nvme_admin": false, 00:23:50.262 "nvme_io": false 00:23:50.262 }, 00:23:50.262 "driver_specific": { 00:23:50.262 "lvol": { 00:23:50.262 "lvol_store_uuid": "1ea88be1-b9a7-4c6c-8698-2647820af5ce", 00:23:50.262 "base_bdev": "nvme0n1", 00:23:50.262 "thin_provision": true, 00:23:50.262 "snapshot": false, 00:23:50.262 "clone": false, 00:23:50.262 "esnap_clone": false 00:23:50.262 } 00:23:50.262 } 00:23:50.262 } 00:23:50.262 ]' 00:23:50.262 12:44:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:50.262 12:44:59 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:50.262 12:44:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:50.262 12:44:59 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:50.262 12:44:59 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:50.262 12:44:59 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:50.262 12:44:59 -- ftl/common.sh@48 -- # cache_size=5171 00:23:50.262 12:44:59 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:50.520 12:44:59 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:50.520 12:44:59 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:50.520 12:44:59 -- common/autotest_common.sh@1357 -- # local bdev_name=3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:50.520 12:44:59 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:50.520 12:44:59 -- common/autotest_common.sh@1359 -- # local bs 00:23:50.520 12:44:59 -- common/autotest_common.sh@1360 -- # local nb 00:23:50.520 12:44:59 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 00:23:50.778 12:44:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:50.778 { 00:23:50.778 "name": "3022c3e1-f70e-45df-88c1-b45fe0ad67f0", 00:23:50.778 
"aliases": [ 00:23:50.778 "lvs/nvme0n1p0" 00:23:50.778 ], 00:23:50.778 "product_name": "Logical Volume", 00:23:50.778 "block_size": 4096, 00:23:50.778 "num_blocks": 26476544, 00:23:50.778 "uuid": "3022c3e1-f70e-45df-88c1-b45fe0ad67f0", 00:23:50.778 "assigned_rate_limits": { 00:23:50.778 "rw_ios_per_sec": 0, 00:23:50.778 "rw_mbytes_per_sec": 0, 00:23:50.778 "r_mbytes_per_sec": 0, 00:23:50.778 "w_mbytes_per_sec": 0 00:23:50.778 }, 00:23:50.778 "claimed": false, 00:23:50.778 "zoned": false, 00:23:50.778 "supported_io_types": { 00:23:50.778 "read": true, 00:23:50.778 "write": true, 00:23:50.778 "unmap": true, 00:23:50.778 "write_zeroes": true, 00:23:50.778 "flush": false, 00:23:50.778 "reset": true, 00:23:50.778 "compare": false, 00:23:50.778 "compare_and_write": false, 00:23:50.778 "abort": false, 00:23:50.778 "nvme_admin": false, 00:23:50.779 "nvme_io": false 00:23:50.779 }, 00:23:50.779 "driver_specific": { 00:23:50.779 "lvol": { 00:23:50.779 "lvol_store_uuid": "1ea88be1-b9a7-4c6c-8698-2647820af5ce", 00:23:50.779 "base_bdev": "nvme0n1", 00:23:50.779 "thin_provision": true, 00:23:50.779 "snapshot": false, 00:23:50.779 "clone": false, 00:23:50.779 "esnap_clone": false 00:23:50.779 } 00:23:50.779 } 00:23:50.779 } 00:23:50.779 ]' 00:23:50.779 12:44:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:50.779 12:44:59 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:50.779 12:44:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:50.779 12:44:59 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:50.779 12:44:59 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:50.779 12:44:59 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:50.779 12:44:59 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:50.779 12:44:59 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 --l2p_dram_limit 10' 00:23:50.779 12:44:59 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:50.779 12:44:59 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:23:50.779 12:44:59 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:50.779 12:44:59 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3022c3e1-f70e-45df-88c1-b45fe0ad67f0 --l2p_dram_limit 10 -c nvc0n1p0 00:23:51.037 [2024-05-15 12:45:00.026988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.037 [2024-05-15 12:45:00.027065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:51.037 [2024-05-15 12:45:00.027114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:51.037 [2024-05-15 12:45:00.027131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.037 [2024-05-15 12:45:00.027226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.037 [2024-05-15 12:45:00.027248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.037 [2024-05-15 12:45:00.027268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:51.037 [2024-05-15 12:45:00.027282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.037 [2024-05-15 12:45:00.027321] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:51.037 [2024-05-15 12:45:00.028376] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as 
NV Cache device 00:23:51.037 [2024-05-15 12:45:00.028437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.037 [2024-05-15 12:45:00.028457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.037 [2024-05-15 12:45:00.028475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:23:51.037 [2024-05-15 12:45:00.028502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.037 [2024-05-15 12:45:00.028656] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 93cea669-6017-42ed-bc97-87991307ce19 00:23:51.037 [2024-05-15 12:45:00.030525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.037 [2024-05-15 12:45:00.030570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:51.037 [2024-05-15 12:45:00.030591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:51.037 [2024-05-15 12:45:00.030609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.037 [2024-05-15 12:45:00.040388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.037 [2024-05-15 12:45:00.040461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.037 [2024-05-15 12:45:00.040488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.700 ms 00:23:51.037 [2024-05-15 12:45:00.040529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.037 [2024-05-15 12:45:00.040720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.037 [2024-05-15 12:45:00.040751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.037 [2024-05-15 12:45:00.040769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:51.037 [2024-05-15 12:45:00.040791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.037 [2024-05-15 12:45:00.040875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.037 [2024-05-15 12:45:00.040901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:51.037 [2024-05-15 12:45:00.040918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:51.037 [2024-05-15 12:45:00.040935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.037 [2024-05-15 12:45:00.040985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:51.296 [2024-05-15 12:45:00.046213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.296 [2024-05-15 12:45:00.046263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.296 [2024-05-15 12:45:00.046288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.241 ms 00:23:51.296 [2024-05-15 12:45:00.046303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.296 [2024-05-15 12:45:00.046360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.296 [2024-05-15 12:45:00.046379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:51.296 [2024-05-15 12:45:00.046398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:51.296 [2024-05-15 12:45:00.046412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.296 [2024-05-15 12:45:00.046466] ftl_layout.c: 
605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:51.296 [2024-05-15 12:45:00.046631] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:51.296 [2024-05-15 12:45:00.046663] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:51.296 [2024-05-15 12:45:00.046683] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:51.296 [2024-05-15 12:45:00.046705] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:51.296 [2024-05-15 12:45:00.046722] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:51.296 [2024-05-15 12:45:00.046740] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:51.296 [2024-05-15 12:45:00.046764] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:51.296 [2024-05-15 12:45:00.046782] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:51.296 [2024-05-15 12:45:00.046801] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:51.296 [2024-05-15 12:45:00.046820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.296 [2024-05-15 12:45:00.046835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:51.296 [2024-05-15 12:45:00.046870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:23:51.296 [2024-05-15 12:45:00.046885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.296 [2024-05-15 12:45:00.046971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.296 [2024-05-15 12:45:00.046990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:51.296 [2024-05-15 12:45:00.047008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:51.296 [2024-05-15 12:45:00.047023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.296 [2024-05-15 12:45:00.047122] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:51.296 [2024-05-15 12:45:00.047146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:51.296 [2024-05-15 12:45:00.047163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:51.296 [2024-05-15 12:45:00.047178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.296 [2024-05-15 12:45:00.047196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:51.296 [2024-05-15 12:45:00.047209] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:51.296 [2024-05-15 12:45:00.047225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:51.296 [2024-05-15 12:45:00.047239] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:51.296 [2024-05-15 12:45:00.047255] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:51.296 [2024-05-15 12:45:00.047268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:51.296 [2024-05-15 12:45:00.047285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:51.296 [2024-05-15 12:45:00.047299] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:51.296 
[2024-05-15 12:45:00.047318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:51.296 [2024-05-15 12:45:00.047333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:51.296 [2024-05-15 12:45:00.047348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:51.296 [2024-05-15 12:45:00.047362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.296 [2024-05-15 12:45:00.047381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:51.296 [2024-05-15 12:45:00.047394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:51.296 [2024-05-15 12:45:00.047410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.296 [2024-05-15 12:45:00.047424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:51.296 [2024-05-15 12:45:00.047440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:51.296 [2024-05-15 12:45:00.047454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:51.296 [2024-05-15 12:45:00.047470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:51.296 [2024-05-15 12:45:00.047484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:51.296 [2024-05-15 12:45:00.047529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:51.296 [2024-05-15 12:45:00.047547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:51.296 [2024-05-15 12:45:00.047565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:51.296 [2024-05-15 12:45:00.047579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:51.296 [2024-05-15 12:45:00.047595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:51.297 [2024-05-15 12:45:00.047608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:51.297 [2024-05-15 12:45:00.047625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:51.297 [2024-05-15 12:45:00.047638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:51.297 [2024-05-15 12:45:00.047657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:51.297 [2024-05-15 12:45:00.047671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:51.297 [2024-05-15 12:45:00.047687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:51.297 [2024-05-15 12:45:00.047700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:51.297 [2024-05-15 12:45:00.047719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:51.297 [2024-05-15 12:45:00.047733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:51.297 [2024-05-15 12:45:00.047750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:51.297 [2024-05-15 12:45:00.047764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:51.297 [2024-05-15 12:45:00.047779] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:51.297 [2024-05-15 12:45:00.047795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:51.297 [2024-05-15 12:45:00.047811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:51.297 [2024-05-15 12:45:00.047826] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:51.297 [2024-05-15 12:45:00.047843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:51.297 [2024-05-15 12:45:00.047857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:51.297 [2024-05-15 12:45:00.047873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:51.297 [2024-05-15 12:45:00.047888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:51.297 [2024-05-15 12:45:00.047906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:51.297 [2024-05-15 12:45:00.047920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:51.297 [2024-05-15 12:45:00.047937] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:51.297 [2024-05-15 12:45:00.047954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:51.297 [2024-05-15 12:45:00.047973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:51.297 [2024-05-15 12:45:00.047988] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:51.297 [2024-05-15 12:45:00.048005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:51.297 [2024-05-15 12:45:00.048019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:51.297 [2024-05-15 12:45:00.048036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:51.297 [2024-05-15 12:45:00.048052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:51.297 [2024-05-15 12:45:00.048070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:51.297 [2024-05-15 12:45:00.048084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:51.297 [2024-05-15 12:45:00.048101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:51.297 [2024-05-15 12:45:00.048115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:51.297 [2024-05-15 12:45:00.048132] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:51.297 [2024-05-15 12:45:00.048147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:51.297 [2024-05-15 12:45:00.048169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:51.297 [2024-05-15 12:45:00.048184] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:51.297 [2024-05-15 12:45:00.048208] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:51.297 [2024-05-15 12:45:00.048223] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:51.297 [2024-05-15 12:45:00.048242] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:51.297 [2024-05-15 12:45:00.048258] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:51.297 [2024-05-15 12:45:00.048276] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:51.297 [2024-05-15 12:45:00.048292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.048310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:51.297 [2024-05-15 12:45:00.048325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:23:51.297 [2024-05-15 12:45:00.048342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.070720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.070799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.297 [2024-05-15 12:45:00.070825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.306 ms 00:23:51.297 [2024-05-15 12:45:00.070843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.070978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.071004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:51.297 [2024-05-15 12:45:00.071021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:51.297 [2024-05-15 12:45:00.071038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.114863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.114946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.297 [2024-05-15 12:45:00.114971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.731 ms 00:23:51.297 [2024-05-15 12:45:00.114990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.115058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.115087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.297 [2024-05-15 12:45:00.115103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:51.297 [2024-05-15 12:45:00.115121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.115798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.115836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.297 [2024-05-15 12:45:00.115859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:23:51.297 [2024-05-15 12:45:00.115877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.116034] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.116071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.297 [2024-05-15 12:45:00.116099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:23:51.297 [2024-05-15 12:45:00.116117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.138156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.138243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.297 [2024-05-15 12:45:00.138267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.004 ms 00:23:51.297 [2024-05-15 12:45:00.138285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.152860] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:51.297 [2024-05-15 12:45:00.157027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.157069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:51.297 [2024-05-15 12:45:00.157119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.587 ms 00:23:51.297 [2024-05-15 12:45:00.157134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.225989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.297 [2024-05-15 12:45:00.226072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:51.297 [2024-05-15 12:45:00.226125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.799 ms 00:23:51.297 [2024-05-15 12:45:00.226141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.297 [2024-05-15 12:45:00.226215] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
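The scrub announced above is why the harness calls bdev_ftl_create through rpc.py with -t 240: on a freshly created FTL device the whole NV-cache data region must be zeroed before startup can finish. Condensed, the RPC sequence that built ftl0 in this run amounts to the sketch below. It assumes a spdk_tgt already listening on the default /var/tmp/spdk.sock; the PCI addresses and MiB sizes are the ones this run used, and LVS_UUID/LVOL_UUID are placeholder variables holding the UUIDs the two lvol calls print (1ea88be1-... and 3022c3e1-... here).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the base (data) controller and the cache controller; addresses from this run
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0

# Thin-provisioned 103424 MiB logical volume on the base namespace
LVS_UUID=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
LVOL_UUID=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS_UUID")

# 5171 MiB write-buffer partition split off the cache namespace (creates nvc0n1p0)
$rpc bdev_split_create nvc0n1 -s 5171 1

# FTL bdev with a 10 MiB L2P DRAM limit; the 240 s timeout covers the first-startup scrub
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$LVOL_UUID" --l2p_dram_limit 10 -c nvc0n1p0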
00:23:51.297 [2024-05-15 12:45:00.226240] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:23:53.869 [2024-05-15 12:45:02.731394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.869 [2024-05-15 12:45:02.731480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:53.869 [2024-05-15 12:45:02.731576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2505.183 ms 00:23:53.869 [2024-05-15 12:45:02.731596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.869 [2024-05-15 12:45:02.731845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.869 [2024-05-15 12:45:02.731867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:53.869 [2024-05-15 12:45:02.731899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:23:53.869 [2024-05-15 12:45:02.731915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.869 [2024-05-15 12:45:02.762839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.869 [2024-05-15 12:45:02.762921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:53.869 [2024-05-15 12:45:02.762947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.845 ms 00:23:53.869 [2024-05-15 12:45:02.762962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.869 [2024-05-15 12:45:02.793226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.869 [2024-05-15 12:45:02.793296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:53.869 [2024-05-15 12:45:02.793344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.198 ms 00:23:53.869 [2024-05-15 12:45:02.793360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.869 [2024-05-15 12:45:02.793878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.869 [2024-05-15 12:45:02.793914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:53.869 [2024-05-15 12:45:02.793937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:23:53.869 [2024-05-15 12:45:02.793952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.869 [2024-05-15 12:45:02.871769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.869 [2024-05-15 12:45:02.871842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:53.869 [2024-05-15 12:45:02.871872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.737 ms 00:23:53.869 [2024-05-15 12:45:02.871888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.128 [2024-05-15 12:45:02.904840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.128 [2024-05-15 12:45:02.904918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:54.128 [2024-05-15 12:45:02.904947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.884 ms 00:23:54.128 [2024-05-15 12:45:02.904967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.128 [2024-05-15 12:45:02.907527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.128 [2024-05-15 12:45:02.907573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
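With the 4 GiB scrub finished in about 2.5 s, startup finalizes and bdev_ftl_create returns the JSON blob for ftl0 seen below. The next phase needs a kernel block device, so the test loads the nbd module, exports ftl0 over it, and waits for the kernel to actually publish /dev/nbd0 before touching it. Judging from the xtrace that follows, the waitfornbd helper in common/autotest_common.sh behaves roughly like this sketch; the polling loop and the temp-file path are assumptions, while the grep/dd/stat probes match the log:

waitfornbd() {
    local nbd_name=$1 i
    # Poll until the kernel lists the device in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Prove it is readable: one direct-I/O block must come back with a non-zero size
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [[ $size != 0 ]]
}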
00:23:54.128 [2024-05-15 12:45:02.907600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:23:54.128 [2024-05-15 12:45:02.907615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.128 [2024-05-15 12:45:02.938652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.128 [2024-05-15 12:45:02.938705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:54.128 [2024-05-15 12:45:02.938731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.949 ms 00:23:54.128 [2024-05-15 12:45:02.938747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.128 [2024-05-15 12:45:02.938823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.128 [2024-05-15 12:45:02.938848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:54.128 [2024-05-15 12:45:02.938867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:54.128 [2024-05-15 12:45:02.938882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.128 [2024-05-15 12:45:02.939017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.128 [2024-05-15 12:45:02.939040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:54.128 [2024-05-15 12:45:02.939060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:54.128 [2024-05-15 12:45:02.939078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.128 [2024-05-15 12:45:02.940404] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2912.889 ms, result 0 00:23:54.128 { 00:23:54.128 "name": "ftl0", 00:23:54.128 "uuid": "93cea669-6017-42ed-bc97-87991307ce19" 00:23:54.128 } 00:23:54.128 12:45:02 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:54.128 12:45:02 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:54.386 12:45:03 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:54.386 12:45:03 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:54.386 12:45:03 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:54.644 /dev/nbd0 00:23:54.644 12:45:03 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:54.644 12:45:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:54.644 12:45:03 -- common/autotest_common.sh@857 -- # local i 00:23:54.644 12:45:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:54.644 12:45:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:54.644 12:45:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:54.644 12:45:03 -- common/autotest_common.sh@861 -- # break 00:23:54.644 12:45:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:54.644 12:45:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:54.644 12:45:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:54.644 1+0 records in 00:23:54.644 1+0 records out 00:23:54.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371075 s, 11.0 MB/s 00:23:54.644 12:45:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:54.644 12:45:03 -- common/autotest_common.sh@874 -- # size=4096 00:23:54.644 12:45:03 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:54.644 12:45:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:54.644 12:45:03 -- common/autotest_common.sh@877 -- # return 0 00:23:54.644 12:45:03 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:54.901 [2024-05-15 12:45:03.661156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:54.901 [2024-05-15 12:45:03.661388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76906 ] 00:23:54.901 [2024-05-15 12:45:03.838068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.159 [2024-05-15 12:45:04.110537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.265  Copying: 161/1024 [MB] (161 MBps) Copying: 323/1024 [MB] (162 MBps) Copying: 486/1024 [MB] (162 MBps) Copying: 649/1024 [MB] (162 MBps) Copying: 810/1024 [MB] (161 MBps) Copying: 972/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 161 MBps) 00:24:03.265 00:24:03.265 12:45:11 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:05.166 12:45:14 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:05.166 [2024-05-15 12:45:14.159331] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
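The first spdk_dd pass above filled testfile with 1 GiB of random data at roughly 161 MBps, and its md5sum is taken as the reference for the post-shutdown comparison. The second pass pushes that file through /dev/nbd0 with --oflag=direct, so every 4 KiB block traverses the NBD layer and the FTL write path; the progress below averages about 15 MBps. A condensed sketch of this write-and-teardown phase, with the long repo paths from this run abbreviated and md5.before as an assumed scratch file:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
dd_sock=/var/tmp/spdk_dd.sock

# Reference checksum of the random payload, recorded before it enters the FTL bdev
md5sum testfile > md5.before

# 262144 x 4 KiB direct writes through the NBD export of ftl0
spdk_dd -m 0x2 -r "$dd_sock" --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

# Flush, detach the NBD export, and unload; bdev_ftl_unload persists the L2P,
# band and NV-cache metadata on the way down (the trace_step entries that follow)
sync /dev/nbd0
$rpc nbd_stop_disk /dev/nbd0
$rpc bdev_ftl_unload -b ftl0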
00:24:05.166 [2024-05-15 12:45:14.159470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77020 ] 00:24:05.424 [2024-05-15 12:45:14.321703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.683 [2024-05-15 12:45:14.565022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.161  Copying: 1024/1024 [MB] (average 15 MBps) 00:25:13.161 00:25:13.161 12:46:22 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:13.419 12:46:22 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:13.678 12:46:22 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:13.678 [2024-05-15 12:46:22.638938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.678 [2024-05-15 12:46:22.639017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:13.678 [2024-05-15 12:46:22.639045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:13.678 [2024-05-15 12:46:22.639064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.678 [2024-05-15 12:46:22.639141] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:13.678 [2024-05-15 12:46:22.642983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.678 [2024-05-15 12:46:22.643038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:13.678 [2024-05-15 12:46:22.643079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.804 ms 00:25:13.678 [2024-05-15 12:46:22.643094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.678 [2024-05-15 12:46:22.645012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.678 [2024-05-15 12:46:22.645054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:13.678 [2024-05-15 12:46:22.645096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:25:13.678 [2024-05-15 12:46:22.645110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.678 [2024-05-15 12:46:22.662041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.678 [2024-05-15 12:46:22.662214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:13.678 [2024-05-15 12:46:22.662392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.893 ms 00:25:13.678 [2024-05-15 12:46:22.662552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.679 [2024-05-15 12:46:22.669170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.679 [2024-05-15 12:46:22.669212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:13.679 [2024-05-15 12:46:22.669251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.430 ms 00:25:13.679 [2024-05-15 12:46:22.669266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.699676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.939 [2024-05-15 12:46:22.699735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:13.939 [2024-05-15 12:46:22.699762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.287 ms 00:25:13.939 [2024-05-15 12:46:22.699778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.718966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.939 [2024-05-15 12:46:22.719048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:13.939 [2024-05-15 12:46:22.719095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.118 ms 00:25:13.939 [2024-05-15 12:46:22.719116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.719392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.939 [2024-05-15 12:46:22.719417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:13.939 [2024-05-15 12:46:22.719437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:25:13.939 [2024-05-15 12:46:22.719452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.750535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.939 [2024-05-15 12:46:22.750585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:13.939 [2024-05-15 12:46:22.750612] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.016 ms 00:25:13.939 [2024-05-15 12:46:22.750627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.780813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.939 [2024-05-15 12:46:22.780875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:13.939 [2024-05-15 12:46:22.780901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.122 ms 00:25:13.939 [2024-05-15 12:46:22.780918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.810116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.939 [2024-05-15 12:46:22.810172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:13.939 [2024-05-15 12:46:22.810200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.128 ms 00:25:13.939 [2024-05-15 12:46:22.810215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.839762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.939 [2024-05-15 12:46:22.839817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:13.939 [2024-05-15 12:46:22.839854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.403 ms 00:25:13.939 [2024-05-15 12:46:22.839871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.939 [2024-05-15 12:46:22.839937] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:13.939 [2024-05-15 12:46:22.839982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840195] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 
[2024-05-15 12:46:22.840657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:13.939 [2024-05-15 12:46:22.840864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.840881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.840896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.840912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.840927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.840945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.840960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.840977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:25:13.940 [2024-05-15 12:46:22.841095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:13.940 [2024-05-15 12:46:22.841748] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:13.940 [2024-05-15 12:46:22.841766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93cea669-6017-42ed-bc97-87991307ce19 00:25:13.940 [2024-05-15 12:46:22.841781] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:13.940 [2024-05-15 12:46:22.841797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:13.940 [2024-05-15 12:46:22.841811] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:13.940 [2024-05-15 12:46:22.841833] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:13.940 [2024-05-15 12:46:22.841846] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:13.940 [2024-05-15 12:46:22.841863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:13.940 [2024-05-15 12:46:22.841876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:13.940 [2024-05-15 12:46:22.841891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:13.940 [2024-05-15 12:46:22.841904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:13.940 [2024-05-15 12:46:22.841924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.940 [2024-05-15 12:46:22.841938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:13.940 [2024-05-15 12:46:22.841956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.993 ms 00:25:13.940 [2024-05-15 12:46:22.841971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.940 [2024-05-15 12:46:22.858455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:13.940 [2024-05-15 12:46:22.858582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:13.940 [2024-05-15 12:46:22.858631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.403 ms 00:25:13.940 [2024-05-15 12:46:22.858648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.940 [2024-05-15 12:46:22.858937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.940 [2024-05-15 12:46:22.858958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:13.940 [2024-05-15 12:46:22.858977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:25:13.940 [2024-05-15 12:46:22.858992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.940 [2024-05-15 12:46:22.914612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.940 [2024-05-15 12:46:22.914693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.940 [2024-05-15 12:46:22.914737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.940 [2024-05-15 12:46:22.914752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.940 [2024-05-15 12:46:22.914855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.940 [2024-05-15 12:46:22.914874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.940 [2024-05-15 12:46:22.914892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.940 [2024-05-15 12:46:22.914923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.940 [2024-05-15 12:46:22.915061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.940 [2024-05-15 12:46:22.915083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.940 [2024-05-15 12:46:22.915105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.940 [2024-05-15 12:46:22.915119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.940 [2024-05-15 12:46:22.915153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.940 [2024-05-15 12:46:22.915170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.940 [2024-05-15 12:46:22.915186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.940 [2024-05-15 12:46:22.915199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.016468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.016581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:14.199 [2024-05-15 12:46:23.016626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.016641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.056753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.056814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:14.199 [2024-05-15 12:46:23.056841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.056856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 
[2024-05-15 12:46:23.057026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.057047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:14.199 [2024-05-15 12:46:23.057064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.057081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.057157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.057177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:14.199 [2024-05-15 12:46:23.057194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.057207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.057346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.057368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:14.199 [2024-05-15 12:46:23.057386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.057399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.057473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.057494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:14.199 [2024-05-15 12:46:23.057576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.057592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.057655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.057674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:14.199 [2024-05-15 12:46:23.057692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.057706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.057782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:14.199 [2024-05-15 12:46:23.057803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:14.199 [2024-05-15 12:46:23.057821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:14.199 [2024-05-15 12:46:23.057835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.199 [2024-05-15 12:46:23.058047] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.057 ms, result 0 00:25:14.199 true 00:25:14.199 12:46:23 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76762 00:25:14.199 12:46:23 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76762 00:25:14.199 12:46:23 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:14.199 [2024-05-15 12:46:23.170040] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:14.199 [2024-05-15 12:46:23.170193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77713 ] 00:25:14.456 [2024-05-15 12:46:23.334492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.714 [2024-05-15 12:46:23.575922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.832  Copying: 160/1024 [MB] (160 MBps) Copying: 316/1024 [MB] (155 MBps) Copying: 468/1024 [MB] (152 MBps) Copying: 621/1024 [MB] (153 MBps) Copying: 770/1024 [MB] (148 MBps) Copying: 918/1024 [MB] (147 MBps) Copying: 1024/1024 [MB] (average 152 MBps) 00:25:22.832 00:25:22.832 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76762 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:22.832 12:46:31 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:23.091 [2024-05-15 12:46:31.905459] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:23.091 [2024-05-15 12:46:31.905692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77799 ] 00:25:23.091 [2024-05-15 12:46:32.078849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.349 [2024-05-15 12:46:32.319470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.916 [2024-05-15 12:46:32.670444] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.916 [2024-05-15 12:46:32.670590] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.916 [2024-05-15 12:46:32.735979] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:23.916 [2024-05-15 12:46:32.736521] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:23.916 [2024-05-15 12:46:32.736771] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:24.176 [2024-05-15 12:46:32.985090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.176 [2024-05-15 12:46:32.985175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:24.176 [2024-05-15 12:46:32.985215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:24.176 [2024-05-15 12:46:32.985230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.176 [2024-05-15 12:46:32.985349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.176 [2024-05-15 12:46:32.985374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:24.176 [2024-05-15 12:46:32.985389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:24.176 [2024-05-15 12:46:32.985402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.176 [2024-05-15 12:46:32.985449] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:24.176 [2024-05-15 12:46:32.986528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:25:24.176 [2024-05-15 12:46:32.986560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.176 [2024-05-15 12:46:32.986575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.176 [2024-05-15 12:46:32.986596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:25:24.176 [2024-05-15 12:46:32.986609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:32.988623] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:24.177 [2024-05-15 12:46:33.005072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.005119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:24.177 [2024-05-15 12:46:33.005155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.450 ms 00:25:24.177 [2024-05-15 12:46:33.005169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.005246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.005269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:24.177 [2024-05-15 12:46:33.005282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:24.177 [2024-05-15 12:46:33.005301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.014346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.014393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.177 [2024-05-15 12:46:33.014429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.941 ms 00:25:24.177 [2024-05-15 12:46:33.014442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.014627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.014653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.177 [2024-05-15 12:46:33.014674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:25:24.177 [2024-05-15 12:46:33.014688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.014755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.014777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:24.177 [2024-05-15 12:46:33.014792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:24.177 [2024-05-15 12:46:33.014805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.014855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:24.177 [2024-05-15 12:46:33.019795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.019837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.177 [2024-05-15 12:46:33.019873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:25:24.177 [2024-05-15 12:46:33.019886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.019934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 
[2024-05-15 12:46:33.019954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:24.177 [2024-05-15 12:46:33.019974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:24.177 [2024-05-15 12:46:33.020003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.020076] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:24.177 [2024-05-15 12:46:33.020115] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:24.177 [2024-05-15 12:46:33.020158] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:24.177 [2024-05-15 12:46:33.020179] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:24.177 [2024-05-15 12:46:33.020257] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:24.177 [2024-05-15 12:46:33.020281] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:24.177 [2024-05-15 12:46:33.020297] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:24.177 [2024-05-15 12:46:33.020313] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:24.177 [2024-05-15 12:46:33.020328] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:24.177 [2024-05-15 12:46:33.020343] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:24.177 [2024-05-15 12:46:33.020355] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:24.177 [2024-05-15 12:46:33.020367] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:24.177 [2024-05-15 12:46:33.020379] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:24.177 [2024-05-15 12:46:33.020393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.020406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:24.177 [2024-05-15 12:46:33.020419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:25:24.177 [2024-05-15 12:46:33.020437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.020516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.177 [2024-05-15 12:46:33.020578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:24.177 [2024-05-15 12:46:33.020593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:24.177 [2024-05-15 12:46:33.020607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.177 [2024-05-15 12:46:33.020720] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:24.177 [2024-05-15 12:46:33.020742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:24.177 [2024-05-15 12:46:33.020756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:24.177 [2024-05-15 12:46:33.020770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.177 [2024-05-15 12:46:33.020790] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:24.177 [2024-05-15 12:46:33.020803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:24.177 [2024-05-15 12:46:33.020815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:24.177 [2024-05-15 12:46:33.020827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:24.177 [2024-05-15 12:46:33.020839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:24.177 [2024-05-15 12:46:33.020851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:24.177 [2024-05-15 12:46:33.020863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:24.177 [2024-05-15 12:46:33.020875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:24.177 [2024-05-15 12:46:33.020886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:24.177 [2024-05-15 12:46:33.020898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:24.177 [2024-05-15 12:46:33.020910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:24.177 [2024-05-15 12:46:33.020930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.177 [2024-05-15 12:46:33.020943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:24.177 [2024-05-15 12:46:33.020971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:24.177 [2024-05-15 12:46:33.020984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.177 [2024-05-15 12:46:33.020997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:24.177 [2024-05-15 12:46:33.021009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:24.177 [2024-05-15 12:46:33.021022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:24.177 [2024-05-15 12:46:33.021034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:24.177 [2024-05-15 12:46:33.021046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:24.177 [2024-05-15 12:46:33.021059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:24.177 [2024-05-15 12:46:33.021071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:24.178 [2024-05-15 12:46:33.021083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:24.178 [2024-05-15 12:46:33.021095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:24.178 [2024-05-15 12:46:33.021107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:24.178 [2024-05-15 12:46:33.021119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:24.178 [2024-05-15 12:46:33.021131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:24.178 [2024-05-15 12:46:33.021143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:24.178 [2024-05-15 12:46:33.021156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:24.178 [2024-05-15 12:46:33.021168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:24.178 [2024-05-15 12:46:33.021180] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:24.178 [2024-05-15 12:46:33.021192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:24.178 [2024-05-15 
12:46:33.021205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:24.178 [2024-05-15 12:46:33.021217] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:24.178 [2024-05-15 12:46:33.021229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:24.178 [2024-05-15 12:46:33.021241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:24.178 [2024-05-15 12:46:33.021252] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:24.178 [2024-05-15 12:46:33.021266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:24.178 [2024-05-15 12:46:33.021279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:24.178 [2024-05-15 12:46:33.021292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.178 [2024-05-15 12:46:33.021305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:24.178 [2024-05-15 12:46:33.021318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:24.178 [2024-05-15 12:46:33.021330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:24.178 [2024-05-15 12:46:33.021342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:24.178 [2024-05-15 12:46:33.021355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:24.178 [2024-05-15 12:46:33.021368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:24.178 [2024-05-15 12:46:33.021382] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:24.178 [2024-05-15 12:46:33.021399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.178 [2024-05-15 12:46:33.021414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:24.178 [2024-05-15 12:46:33.021427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:24.178 [2024-05-15 12:46:33.021441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:24.178 [2024-05-15 12:46:33.021454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:24.178 [2024-05-15 12:46:33.021467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:24.178 [2024-05-15 12:46:33.021481] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:24.178 [2024-05-15 12:46:33.021494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:24.178 [2024-05-15 12:46:33.021537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:24.178 [2024-05-15 12:46:33.021552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:24.178 [2024-05-15 12:46:33.021566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:24.178 [2024-05-15 12:46:33.021579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:24.178 [2024-05-15 12:46:33.021593] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:24.178 [2024-05-15 12:46:33.021607] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:24.178 [2024-05-15 12:46:33.021620] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:24.178 [2024-05-15 12:46:33.021635] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.178 [2024-05-15 12:46:33.021649] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:24.178 [2024-05-15 12:46:33.021662] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:24.178 [2024-05-15 12:46:33.021676] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:24.178 [2024-05-15 12:46:33.021689] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:24.178 [2024-05-15 12:46:33.021704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.178 [2024-05-15 12:46:33.021717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:24.178 [2024-05-15 12:46:33.021739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:25:24.178 [2024-05-15 12:46:33.021752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.178 [2024-05-15 12:46:33.043863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.178 [2024-05-15 12:46:33.043938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:24.178 [2024-05-15 12:46:33.043985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.043 ms 00:25:24.178 [2024-05-15 12:46:33.044015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.178 [2024-05-15 12:46:33.044145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.178 [2024-05-15 12:46:33.044165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:24.178 [2024-05-15 12:46:33.044179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:24.178 [2024-05-15 12:46:33.044192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.178 [2024-05-15 12:46:33.093973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.178 [2024-05-15 12:46:33.094052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.178 [2024-05-15 12:46:33.094093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.677 ms 00:25:24.178 [2024-05-15 12:46:33.094107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.178 [2024-05-15 12:46:33.094202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:24.178 [2024-05-15 12:46:33.094223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:24.178 [2024-05-15 12:46:33.094238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:24.178 [2024-05-15 12:46:33.094251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.178 [2024-05-15 12:46:33.094941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.178 [2024-05-15 12:46:33.094991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:24.178 [2024-05-15 12:46:33.095010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:25:24.178 [2024-05-15 12:46:33.095024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.178 [2024-05-15 12:46:33.095190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.178 [2024-05-15 12:46:33.095214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:24.179 [2024-05-15 12:46:33.095229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:25:24.179 [2024-05-15 12:46:33.095242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.179 [2024-05-15 12:46:33.114914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.179 [2024-05-15 12:46:33.114962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:24.179 [2024-05-15 12:46:33.114999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.637 ms 00:25:24.179 [2024-05-15 12:46:33.115012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.179 [2024-05-15 12:46:33.131382] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:24.179 [2024-05-15 12:46:33.131440] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:24.179 [2024-05-15 12:46:33.131484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.179 [2024-05-15 12:46:33.131499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:24.179 [2024-05-15 12:46:33.131550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.305 ms 00:25:24.179 [2024-05-15 12:46:33.131566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.179 [2024-05-15 12:46:33.160406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.179 [2024-05-15 12:46:33.160550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:24.179 [2024-05-15 12:46:33.160610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.738 ms 00:25:24.179 [2024-05-15 12:46:33.160625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.179 [2024-05-15 12:46:33.175937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.179 [2024-05-15 12:46:33.175982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:24.179 [2024-05-15 12:46:33.176018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.177 ms 00:25:24.179 [2024-05-15 12:46:33.176030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.191264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.191327] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:24.438 [2024-05-15 12:46:33.191379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.185 ms 00:25:24.438 [2024-05-15 12:46:33.191391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.191974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.192009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:24.438 [2024-05-15 12:46:33.192028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:25:24.438 [2024-05-15 12:46:33.192042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.271800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.271911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:24.438 [2024-05-15 12:46:33.271953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.717 ms 00:25:24.438 [2024-05-15 12:46:33.271983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.287542] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:24.438 [2024-05-15 12:46:33.292009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.292064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:24.438 [2024-05-15 12:46:33.292102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.912 ms 00:25:24.438 [2024-05-15 12:46:33.292116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.292255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.292278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:24.438 [2024-05-15 12:46:33.292293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:24.438 [2024-05-15 12:46:33.292306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.292406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.292427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:24.438 [2024-05-15 12:46:33.292449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:24.438 [2024-05-15 12:46:33.292463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.294896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.294941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:24.438 [2024-05-15 12:46:33.294974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.397 ms 00:25:24.438 [2024-05-15 12:46:33.294988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.295031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.295058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:24.438 [2024-05-15 12:46:33.295072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:24.438 [2024-05-15 12:46:33.295085] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.295140] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:24.438 [2024-05-15 12:46:33.295161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.295174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:24.438 [2024-05-15 12:46:33.295187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:24.438 [2024-05-15 12:46:33.295199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.325074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.325159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:24.438 [2024-05-15 12:46:33.325200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.838 ms 00:25:24.438 [2024-05-15 12:46:33.325229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.325361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.438 [2024-05-15 12:46:33.325382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:24.438 [2024-05-15 12:46:33.325397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:24.438 [2024-05-15 12:46:33.325412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.438 [2024-05-15 12:46:33.327180] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.419 ms, result 0 00:26:06.000  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 68/1024 [MB] (22 MBps) Copying: 94/1024 [MB] (25 MBps) Copying: 120/1024 [MB] (26 MBps) Copying: 146/1024 [MB] (25 MBps) Copying: 171/1024 [MB] (25 MBps) Copying: 196/1024 [MB] (24 MBps) Copying: 222/1024 [MB] (25 MBps) Copying: 247/1024 [MB] (25 MBps) Copying: 272/1024 [MB] (24 MBps) Copying: 297/1024 [MB] (25 MBps) Copying: 323/1024 [MB] (25 MBps) Copying: 349/1024 [MB] (25 MBps) Copying: 375/1024 [MB] (26 MBps) Copying: 401/1024 [MB] (26 MBps) Copying: 427/1024 [MB] (25 MBps) Copying: 453/1024 [MB] (25 MBps) Copying: 477/1024 [MB] (23 MBps) Copying: 502/1024 [MB] (25 MBps) Copying: 528/1024 [MB] (26 MBps) Copying: 554/1024 [MB] (25 MBps) Copying: 579/1024 [MB] (25 MBps) Copying: 604/1024 [MB] (25 MBps) Copying: 630/1024 [MB] (25 MBps) Copying: 655/1024 [MB] (25 MBps) Copying: 680/1024 [MB] (24 MBps) Copying: 706/1024 [MB] (25 MBps) Copying: 732/1024 [MB] (25 MBps) Copying: 758/1024 [MB] (25 MBps) Copying: 784/1024 [MB] (26 MBps) Copying: 811/1024 [MB] (26 MBps) Copying: 836/1024 [MB] (25 MBps) Copying: 862/1024 [MB] (25 MBps) Copying: 887/1024 [MB] (25 MBps) Copying: 912/1024 [MB] (24 MBps) Copying: 937/1024 [MB] (25 MBps) Copying: 963/1024 [MB] (25 MBps) Copying: 988/1024 [MB] (25 MBps) Copying: 1013/1024 [MB] (25 MBps) Copying: 1023/1024 [MB] (10 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-05-15 12:47:14.963946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.000 [2024-05-15 12:47:14.964049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:06.000 [2024-05-15 12:47:14.964088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:06.000 [2024-05-15 12:47:14.964101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:06.000 [2024-05-15 12:47:14.965435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:06.000 [2024-05-15 12:47:14.971392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.000 [2024-05-15 12:47:14.971615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:06.000 [2024-05-15 12:47:14.971745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.701 ms 00:26:06.000 [2024-05-15 12:47:14.971769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.000 [2024-05-15 12:47:14.985985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.000 [2024-05-15 12:47:14.986305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:06.000 [2024-05-15 12:47:14.986467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.534 ms 00:26:06.000 [2024-05-15 12:47:14.986669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.261 [2024-05-15 12:47:15.012145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.261 [2024-05-15 12:47:15.012440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:06.261 [2024-05-15 12:47:15.012473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.396 ms 00:26:06.261 [2024-05-15 12:47:15.012487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.261 [2024-05-15 12:47:15.019136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.261 [2024-05-15 12:47:15.019184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:06.261 [2024-05-15 12:47:15.019217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.579 ms 00:26:06.261 [2024-05-15 12:47:15.019228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.261 [2024-05-15 12:47:15.049285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.261 [2024-05-15 12:47:15.049328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:06.261 [2024-05-15 12:47:15.049360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.995 ms 00:26:06.261 [2024-05-15 12:47:15.049371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.261 [2024-05-15 12:47:15.066002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.261 [2024-05-15 12:47:15.066043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:06.261 [2024-05-15 12:47:15.066076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.590 ms 00:26:06.261 [2024-05-15 12:47:15.066087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.261 [2024-05-15 12:47:15.181347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.261 [2024-05-15 12:47:15.181445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:06.261 [2024-05-15 12:47:15.181482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.210 ms 00:26:06.261 [2024-05-15 12:47:15.181559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.261 [2024-05-15 12:47:15.211805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.261 [2024-05-15 12:47:15.211847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:06.261 
[2024-05-15 12:47:15.211863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.220 ms 00:26:06.261 [2024-05-15 12:47:15.211874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.261 [2024-05-15 12:47:15.243009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.261 [2024-05-15 12:47:15.243066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:06.261 [2024-05-15 12:47:15.243101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.092 ms 00:26:06.261 [2024-05-15 12:47:15.243128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.521 [2024-05-15 12:47:15.274495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.521 [2024-05-15 12:47:15.274604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:06.521 [2024-05-15 12:47:15.274641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.305 ms 00:26:06.521 [2024-05-15 12:47:15.274653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.521 [2024-05-15 12:47:15.303788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.521 [2024-05-15 12:47:15.303845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:06.521 [2024-05-15 12:47:15.303877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.995 ms 00:26:06.521 [2024-05-15 12:47:15.303896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.521 [2024-05-15 12:47:15.303937] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:06.521 [2024-05-15 12:47:15.303977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:26:06.521 [2024-05-15 12:47:15.303991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:06.521 [2024-05-15 12:47:15.304087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 
[2024-05-15 12:47:15.304134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:06.522 [2024-05-15 12:47:15.304471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free
[ftl_dev_dump_bands 12:47:15.304482-305319: Bands 39-100 are identical — 0 / 261120 wr_cnt: 0 state: free — 62 repeated *NOTICE* entries collapsed]
00:26:06.523 [2024-05-15 12:47:15.305340] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:06.523 [2024-05-15 12:47:15.305351] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93cea669-6017-42ed-bc97-87991307ce19
00:26:06.523 [2024-05-15 12:47:15.305378] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024
00:26:06.523 [2024-05-15 12:47:15.305394] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984
00:26:06.523 [2024-05-15 12:47:15.305405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024
00:26:06.523 [2024-05-15 12:47:15.305417] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074
00:26:06.523 [2024-05-15 12:47:15.305427] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:06.523 [2024-05-15 12:47:15.305439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:06.523 [2024-05-15 12:47:15.305449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:06.523 [2024-05-15 12:47:15.305459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:06.523 [2024-05-15 12:47:15.305481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:06.523 [2024-05-15 12:47:15.305492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:06.523 [2024-05-15 12:47:15.305529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:06.523 [2024-05-15 12:47:15.305543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.557 ms
00:26:06.523 [2024-05-15 12:47:15.305554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
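As a sanity check on the WAF figure in the dump above (our arithmetic, not part of the test output): FTL reports write amplification as total media writes divided by user writes, and the two counters logged just before it reproduce the value exactly:

  awk 'BEGIN { printf "WAF = %.4f\n", 129984 / 129024 }'   # -> WAF = 1.0074; the extra 960 blocks are FTL metadata/relocation writes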
00:26:06.523 [2024-05-15 12:47:15.322071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:06.523 [2024-05-15 12:47:15.322108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:26:06.523 [2024-05-15 12:47:15.322141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.459 ms
00:26:06.523 [2024-05-15 12:47:15.322151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.322388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:06.523 [2024-05-15 12:47:15.322403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:26:06.523 [2024-05-15 12:47:15.322414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms
00:26:06.523 [2024-05-15 12:47:15.322424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.368255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.368321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:26:06.523 [2024-05-15 12:47:15.368357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.368370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.368461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.368478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:26:06.523 [2024-05-15 12:47:15.368490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.368502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.368656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.368676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:06.523 [2024-05-15 12:47:15.368689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.368700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.368725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.368739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:06.523 [2024-05-15 12:47:15.368751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.368762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.471164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.471233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:26:06.523 [2024-05-15 12:47:15.471277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.471288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.513224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.513291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:26:06.523 [2024-05-15 12:47:15.513326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.513338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*:
[FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.513451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.513470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:06.523 [2024-05-15 12:47:15.513482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.513492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.513602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.513622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:06.523 [2024-05-15 12:47:15.513634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.513646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.513770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.513795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:06.523 [2024-05-15 12:47:15.513808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.513819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.513868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.513886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:26:06.523 [2024-05-15 12:47:15.513928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.513939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.513998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.514019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:06.523 [2024-05-15 12:47:15.514031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.514043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.514097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.523 [2024-05-15 12:47:15.514113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:06.523 [2024-05-15 12:47:15.514125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.523 [2024-05-15 12:47:15.514137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.523 [2024-05-15 12:47:15.514294] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 553.241 ms, result 0
00:26:08.422
00:26:08.422
00:26:08.422 12:47:17 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:26:10.322 12:47:19 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-05-15 12:47:19.316792] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
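For orientation, this is the read-back half of the dirty-shutdown check: testfile2 holds the data captured before the controlled dirty shutdown above, and spdk_dd now boots a fresh FTL instance from ftl.json and copies the same LBAs back out into testfile. A minimal sketch of the pattern, with paths and flags taken from the two commands above (the checksum comparison is our assumption about what dirty_shutdown.sh does next, not a verbatim excerpt):

  SPDK=/home/vagrant/spdk_repo/spdk
  ref=$(md5sum "$SPDK/test/ftl/testfile2" | cut -d' ' -f1)       # checksum of the pre-shutdown copy
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
      --count=262144 --json="$SPDK/test/ftl/config/ftl.json"     # read 262144 blocks back from the restored FTL bdev
  got=$(md5sum "$SPDK/test/ftl/testfile" | cut -d' ' -f1)
  [ "$ref" = "$got" ] && echo "dirty shutdown: data intact"      # assuming 4 KiB blocks: 262144 blocks = 1 GiB, the 1024 MB copied below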
00:26:10.322 [2024-05-15 12:47:19.317172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78275 ] 00:26:10.581 [2024-05-15 12:47:19.485625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.840 [2024-05-15 12:47:19.748050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.098 [2024-05-15 12:47:20.094337] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.098 [2024-05-15 12:47:20.094408] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.358 [2024-05-15 12:47:20.253040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.253116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:11.358 [2024-05-15 12:47:20.253155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:11.358 [2024-05-15 12:47:20.253168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.253239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.253258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.358 [2024-05-15 12:47:20.253277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:11.358 [2024-05-15 12:47:20.253288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.253319] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:11.358 [2024-05-15 12:47:20.254338] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:11.358 [2024-05-15 12:47:20.254384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.254399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.358 [2024-05-15 12:47:20.254413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:26:11.358 [2024-05-15 12:47:20.254425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.256484] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:11.358 [2024-05-15 12:47:20.272950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.273000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:11.358 [2024-05-15 12:47:20.273041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.468 ms 00:26:11.358 [2024-05-15 12:47:20.273053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.273132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.273152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:11.358 [2024-05-15 12:47:20.273164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:11.358 [2024-05-15 12:47:20.273175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.282743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 
12:47:20.282812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.358 [2024-05-15 12:47:20.282848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.466 ms 00:26:11.358 [2024-05-15 12:47:20.282860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.283019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.283042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.358 [2024-05-15 12:47:20.283056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:11.358 [2024-05-15 12:47:20.283067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.283141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.283164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:11.358 [2024-05-15 12:47:20.283178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:11.358 [2024-05-15 12:47:20.283188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.283270] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:11.358 [2024-05-15 12:47:20.288514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.288570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.358 [2024-05-15 12:47:20.288588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:26:11.358 [2024-05-15 12:47:20.288600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.288672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.358 [2024-05-15 12:47:20.288691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:11.358 [2024-05-15 12:47:20.288705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:11.358 [2024-05-15 12:47:20.288716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.358 [2024-05-15 12:47:20.288767] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:11.358 [2024-05-15 12:47:20.288801] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:26:11.358 [2024-05-15 12:47:20.288847] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:11.358 [2024-05-15 12:47:20.288868] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:26:11.358 [2024-05-15 12:47:20.288952] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:26:11.358 [2024-05-15 12:47:20.288969] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:11.358 [2024-05-15 12:47:20.288984] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:26:11.358 [2024-05-15 12:47:20.288999] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:11.358 [2024-05-15 12:47:20.289013] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:26:11.358 [2024-05-15 12:47:20.289030] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:26:11.358 [2024-05-15 12:47:20.289042] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:26:11.358 [2024-05-15 12:47:20.289053] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:26:11.358 [2024-05-15 12:47:20.289064] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:26:11.358 [2024-05-15 12:47:20.289076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.358 [2024-05-15 12:47:20.289088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:26:11.358 [2024-05-15 12:47:20.289099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms
00:26:11.358 [2024-05-15 12:47:20.289110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.358 [2024-05-15 12:47:20.289189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.358 [2024-05-15 12:47:20.289204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:26:11.358 [2024-05-15 12:47:20.289220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:26:11.358 [2024-05-15 12:47:20.289231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.358 [2024-05-15 12:47:20.289318] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    region           offset            blocks
    sb               0.00 MiB          0.12 MiB
    l2p              0.12 MiB          80.00 MiB
    band_md          80.12 MiB         0.50 MiB
    band_md_mirror   80.62 MiB         0.50 MiB
    p2l0             81.12 MiB         4.00 MiB
    p2l1             85.12 MiB         4.00 MiB
    p2l2             89.12 MiB         4.00 MiB
    p2l3             93.12 MiB         4.00 MiB
    trim_md          97.12 MiB         0.25 MiB
    trim_md_mirror   97.38 MiB         0.25 MiB
    nvc_md           97.62 MiB         0.12 MiB
    nvc_md_mirror    97.75 MiB         0.12 MiB
    data_nvc         97.88 MiB         4096.00 MiB
00:26:11.359 [2024-05-15 12:47:20.289801] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    region           offset            blocks
    sb_mirror        0.00 MiB          0.12 MiB
    data_btm         0.25 MiB          102400.00 MiB
    vmap             102400.25 MiB     3.38 MiB
00:26:11.359 [2024-05-15 12:47:20.289918] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
    Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
    Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
    Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
    Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
    Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
    Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
    Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
    Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:26:11.359 [2024-05-15 12:47:20.290094] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
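A quick cross-check on these layout numbers (our arithmetic, not part of the log): the 80 MiB l2p region follows directly from the superblock parameters dumped above, 20971520 L2P entries at an address size of 4 bytes:

  awk 'BEGIN { printf "%.2f MiB\n", 20971520 * 4 / 1048576 }'   # -> 80.00 MiB, matching the l2p region in the NV cache layout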
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:26:11.359 [2024-05-15 12:47:20.290167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.359 [2024-05-15 12:47:20.290179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:26:11.359 [2024-05-15 12:47:20.290191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms
00:26:11.359 [2024-05-15 12:47:20.290202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.359 [2024-05-15 12:47:20.313926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.359 [2024-05-15 12:47:20.313994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:26:11.359 [2024-05-15 12:47:20.314031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.664 ms
00:26:11.359 [2024-05-15 12:47:20.314043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.359 [2024-05-15 12:47:20.314172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.359 [2024-05-15 12:47:20.314190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:26:11.359 [2024-05-15 12:47:20.314210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:26:11.359 [2024-05-15 12:47:20.314221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.619 [2024-05-15 12:47:20.368457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.619 [2024-05-15 12:47:20.368558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:26:11.619 [2024-05-15 12:47:20.368597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.146 ms
00:26:11.619 [2024-05-15 12:47:20.368625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.619 [2024-05-15 12:47:20.368712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.619 [2024-05-15 12:47:20.368729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:11.619 [2024-05-15 12:47:20.368743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:11.619 [2024-05-15 12:47:20.368758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.619 [2024-05-15 12:47:20.369398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.619 [2024-05-15 12:47:20.369417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:11.619 [2024-05-15 12:47:20.369429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms
00:26:11.619 [2024-05-15 12:47:20.369440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.619 [2024-05-15 12:47:20.369632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.619 [2024-05-15 12:47:20.369655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:26:11.619 [2024-05-15 12:47:20.369668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms
00:26:11.619 [2024-05-15 12:47:20.369678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:11.619 [2024-05-15 12:47:20.389431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:11.619 [2024-05-15 12:47:20.389691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:26:11.619 [2024-05-15 12:47:20.389814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.724 ms
00:26:11.619 [2024-05-15
12:47:20.389866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.406918] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:11.619 [2024-05-15 12:47:20.407182] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:11.619 [2024-05-15 12:47:20.407355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.619 [2024-05-15 12:47:20.407471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:11.619 [2024-05-15 12:47:20.407630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.240 ms 00:26:11.619 [2024-05-15 12:47:20.407736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.438401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.619 [2024-05-15 12:47:20.438624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:11.619 [2024-05-15 12:47:20.438769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.550 ms 00:26:11.619 [2024-05-15 12:47:20.438896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.455283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.619 [2024-05-15 12:47:20.455506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:11.619 [2024-05-15 12:47:20.455626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.222 ms 00:26:11.619 [2024-05-15 12:47:20.455777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.471615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.619 [2024-05-15 12:47:20.471777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:11.619 [2024-05-15 12:47:20.471897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.664 ms 00:26:11.619 [2024-05-15 12:47:20.472003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.472644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.619 [2024-05-15 12:47:20.472787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:11.619 [2024-05-15 12:47:20.472897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:26:11.619 [2024-05-15 12:47:20.472946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.552717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.619 [2024-05-15 12:47:20.552994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:11.619 [2024-05-15 12:47:20.553117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.673 ms 00:26:11.619 [2024-05-15 12:47:20.553239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.566180] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:11.619 [2024-05-15 12:47:20.570501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.619 [2024-05-15 12:47:20.570568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:11.619 [2024-05-15 12:47:20.570589] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.159 ms 00:26:11.619 [2024-05-15 12:47:20.570602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.619 [2024-05-15 12:47:20.570732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.620 [2024-05-15 12:47:20.570756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:11.620 [2024-05-15 12:47:20.570770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:11.620 [2024-05-15 12:47:20.570781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.620 [2024-05-15 12:47:20.572685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.620 [2024-05-15 12:47:20.572725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:11.620 [2024-05-15 12:47:20.572742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.833 ms 00:26:11.620 [2024-05-15 12:47:20.572754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.620 [2024-05-15 12:47:20.574960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.620 [2024-05-15 12:47:20.574999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:26:11.620 [2024-05-15 12:47:20.575035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:26:11.620 [2024-05-15 12:47:20.575045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.620 [2024-05-15 12:47:20.575083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.620 [2024-05-15 12:47:20.575098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:11.620 [2024-05-15 12:47:20.575110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:11.620 [2024-05-15 12:47:20.575122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.620 [2024-05-15 12:47:20.575189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:11.620 [2024-05-15 12:47:20.575208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.620 [2024-05-15 12:47:20.575219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:11.620 [2024-05-15 12:47:20.575231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:11.620 [2024-05-15 12:47:20.575247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.620 [2024-05-15 12:47:20.606105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.620 [2024-05-15 12:47:20.606152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:11.620 [2024-05-15 12:47:20.606187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.827 ms 00:26:11.620 [2024-05-15 12:47:20.606199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.620 [2024-05-15 12:47:20.606295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.620 [2024-05-15 12:47:20.606321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:11.620 [2024-05-15 12:47:20.606334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:11.620 [2024-05-15 12:47:20.606345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.620 [2024-05-15 12:47:20.613929] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 359.449 ms, result 0
00:26:50.052 [spdk_dd progress: Copying 900/1048576 kB (900 kBps), 4440/1048576 kB (3540 kBps), then a steady 27-29 MBps up to Copying 1024/1024 MB (average 26 MBps) — intermediate progress lines collapsed]
00:26:50.052 [2024-05-15 12:47:59.055638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.052 [2024-05-15 12:47:59.056109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:50.052 [2024-05-15 12:47:59.056385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:26:50.052 [2024-05-15 12:47:59.056484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:50.052 [2024-05-15 12:47:59.056744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:50.052 [2024-05-15 12:47:59.061191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.052 [2024-05-15 12:47:59.061345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:50.052 [2024-05-15 12:47:59.061471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.147 ms
00:26:50.052 [2024-05-15 12:47:59.061559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:50.052 [2024-05-15 12:47:59.061983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.052 [2024-05-15 12:47:59.062129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:50.052 [2024-05-15 12:47:59.062264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms
00:26:50.052 [2024-05-15 12:47:59.062316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:50.312 [2024-05-15 12:47:59.074176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.312 [2024-05-15 12:47:59.074375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:50.312 [2024-05-15 12:47:59.074508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.728 ms
00:26:50.312 [2024-05-15 12:47:59.074643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:50.312 [2024-05-15 12:47:59.081401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.312 [2024-05-15 12:47:59.081574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:26:50.312 [2024-05-15
12:47:59.081718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.671 ms 00:26:50.312 [2024-05-15 12:47:59.081752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.115768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.312 [2024-05-15 12:47:59.116019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:50.312 [2024-05-15 12:47:59.116179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.923 ms 00:26:50.312 [2024-05-15 12:47:59.116231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.133825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.312 [2024-05-15 12:47:59.133982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:50.312 [2024-05-15 12:47:59.134124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.451 ms 00:26:50.312 [2024-05-15 12:47:59.134178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.137255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.312 [2024-05-15 12:47:59.137403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:50.312 [2024-05-15 12:47:59.137547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:26:50.312 [2024-05-15 12:47:59.137675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.167710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.312 [2024-05-15 12:47:59.167882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:50.312 [2024-05-15 12:47:59.168002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.964 ms 00:26:50.312 [2024-05-15 12:47:59.168062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.199108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.312 [2024-05-15 12:47:59.199535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:50.312 [2024-05-15 12:47:59.199737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.856 ms 00:26:50.312 [2024-05-15 12:47:59.199802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.231942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.312 [2024-05-15 12:47:59.232008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:50.312 [2024-05-15 12:47:59.232028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.059 ms 00:26:50.312 [2024-05-15 12:47:59.232039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.262333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.312 [2024-05-15 12:47:59.262389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:50.312 [2024-05-15 12:47:59.262440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.183 ms 00:26:50.312 [2024-05-15 12:47:59.262451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.312 [2024-05-15 12:47:59.262531] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:50.312 [2024-05-15 12:47:59.262556] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:26:50.312 [2024-05-15 12:47:59.262571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open
[ftl_dev_dump_bands 12:47:59.262584-263184: Bands 3-50 are identical — 0 / 261120 wr_cnt: 0 state: free — repeated *NOTICE* entries collapsed]
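Band-validity dumps like the one running here are long but uniform; when scanning such a run by hand, a throwaway filter condenses it (our sketch, assuming the console log is saved as build.log — not part of the test itself):

  awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ { states[$NF]++ } END { for (s in states) print s, states[s] }' build.log
  # tallies bands by their final "state:" field, e.g. closed 1, open 1, free 98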
[ftl_dev_dump_bands 12:47:59.263196-263782: Bands 51-99 are identical — 0 / 261120 wr_cnt: 0 state: free — repeated *NOTICE* entries collapsed]
00:26:50.314 [2024-05-15 12:47:59.263795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:50.314 [2024-05-15 12:47:59.263816] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:50.314 [2024-05-15 12:47:59.263828] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93cea669-6017-42ed-bc97-87991307ce19 00:26:50.314 [2024-05-15 12:47:59.263841] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:50.314 [2024-05-15 12:47:59.263852] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137664 00:26:50.314 [2024-05-15 12:47:59.263863] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135680 00:26:50.314 [2024-05-15 12:47:59.263875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0146 00:26:50.314 [2024-05-15 12:47:59.263895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:50.314 [2024-05-15 12:47:59.263906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:50.314 [2024-05-15 12:47:59.263917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:50.314 [2024-05-15 12:47:59.263927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:50.314 [2024-05-15 12:47:59.263937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:50.314 [2024-05-15 12:47:59.263948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.314 [2024-05-15 12:47:59.263959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:50.314 [2024-05-15 12:47:59.263970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:26:50.314 [2024-05-15 12:47:59.263981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.314 [2024-05-15 12:47:59.283571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.314 [2024-05-15 12:47:59.283666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:50.314 [2024-05-15 12:47:59.283715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.477 ms 00:26:50.314 [2024-05-15 12:47:59.283737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.314 [2024-05-15 12:47:59.284145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.314 [2024-05-15 12:47:59.284199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:50.314 [2024-05-15 12:47:59.284228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:26:50.314 [2024-05-15 12:47:59.284249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.573 [2024-05-15 12:47:59.354132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.573 [2024-05-15 12:47:59.354225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.573 [2024-05-15 12:47:59.354253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.573 [2024-05-15 12:47:59.354270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.573 [2024-05-15 12:47:59.354390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.573 [2024-05-15 12:47:59.354412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.573 [2024-05-15 12:47:59.354429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.573 [2024-05-15 12:47:59.354445] mngt/ftl_mngt.c: 410:trace_step: 
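The statistics block above has everything needed to cross-check the reported write amplification: WAF is simply total device writes divided by user (host) writes. A minimal arithmetic check, with both counters copied from the dump:

```bash
# Cross-check the logged WAF = total writes / user writes
# (137664 and 135680 are the counters from the dump above).
awk 'BEGIN { total = 137664; user = 135680; printf "WAF = %.4f\n", total / user }'
# prints: WAF = 1.0146, matching the logged value
```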
[2024-05-15 12:47:59.354665] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.354773] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.464348] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.505747] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.505971] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.506083] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.506251] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.506354] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.506441] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.506576] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
[2024-05-15 12:47:59.506775] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.128 ms, result 0
12:48:00 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
12:48:02 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-05-15 12:48:02.901400] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
[2024-05-15 12:48:02.901598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78702 ]
[2024-05-15 12:48:03.081460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-05-15 12:48:03.352044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-05-15 12:48:03.697639] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-05-15 12:48:03.697726] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-05-15 12:48:03.856090] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.007 ms, status: 0
[2024-05-15 12:48:03.856276] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.040 ms, status: 0
[2024-05-15 12:48:03.856347] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
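Steps @94 and @95 of dirty_shutdown.sh above are the verification phase: confirm the half written before the dirty shutdown still matches its recorded md5, then read the second half back out of the restored ftl0 bdev. A minimal sketch of that pattern, reusing the exact paths and flags from the logged commands (the SPDK checkout path is specific to this VM; --skip/--count are in FTL blocks, and 262144 blocks here corresponds to the 1024 MB transfer reported further down):

```bash
#!/usr/bin/env bash
# Verify-after-dirty-shutdown, mirroring dirty_shutdown.sh @94-@95:
# 1) the pre-shutdown half must still match its recorded checksum;
# 2) the post-shutdown half is read back from the restored FTL bdev.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk   # checkout path as logged above

md5sum -c "$SPDK/test/ftl/testfile.md5"

"$SPDK/build/bin/spdk_dd" --ib=ftl0 \
  --of="$SPDK/test/ftl/testfile2" \
  --count=262144 --skip=262144 \
  --json="$SPDK/test/ftl/config/ftl.json"
```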
[2024-05-15 12:48:03.857288] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-05-15 12:48:03.857333] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.991 ms, status: 0
[2024-05-15 12:48:03.859450] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-05-15 12:48:03.874964] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 15.514 ms, status: 0
[2024-05-15 12:48:03.875148] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.030 ms, status: 0
[2024-05-15 12:48:03.884795] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 9.499 ms, status: 0
[2024-05-15 12:48:03.885072] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.114 ms, status: 0
[2024-05-15 12:48:03.885184] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.012 ms, status: 0
[2024-05-15 12:48:03.885293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-05-15 12:48:03.890303] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 5.024 ms, status: 0
[2024-05-15 12:48:03.890430] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
[2024-05-15 12:48:03.890526] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-05-15 12:48:03.890560] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes, base layout blob load 0x48 bytes, layout blob load 0x140 bytes
[2024-05-15 12:48:03.890692] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes, base layout blob store 0x48 bytes, layout blob store 0x140 bytes
[2024-05-15 12:48:03.890734] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB
[2024-05-15 12:48:03.890763] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 1024, NV cache chunk count: 4
[2024-05-15 12:48:03.890806] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.299 ms, status: 0
[2024-05-15 12:48:03.890903] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.043 ms, status: 0
[2024-05-15 12:48:03.891024] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb:              offset 0.00 MiB,      blocks 0.12 MiB
    Region l2p:             offset 0.12 MiB,      blocks 80.00 MiB
    Region band_md:         offset 80.12 MiB,     blocks 0.50 MiB
    Region band_md_mirror:  offset 80.62 MiB,     blocks 0.50 MiB
    Region nvc_md:          offset 97.62 MiB,     blocks 0.12 MiB
    Region nvc_md_mirror:   offset 97.75 MiB,     blocks 0.12 MiB
    Region data_nvc:        offset 97.88 MiB,     blocks 4096.00 MiB
    Region p2l0:            offset 81.12 MiB,     blocks 4.00 MiB
    Region p2l1:            offset 85.12 MiB,     blocks 4.00 MiB
    Region p2l2:            offset 89.12 MiB,     blocks 4.00 MiB
    Region p2l3:            offset 93.12 MiB,     blocks 4.00 MiB
    Region trim_md:         offset 97.12 MiB,     blocks 0.25 MiB
    Region trim_md_mirror:  offset 97.38 MiB,     blocks 0.25 MiB
[2024-05-15 12:48:03.891482] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror:       offset 0.00 MiB,      blocks 0.12 MiB
    Region vmap:            offset 102400.25 MiB, blocks 3.38 MiB
    Region data_btm:        offset 0.25 MiB,      blocks 102400.00 MiB
[2024-05-15 12:48:03.891618] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0        ver:5 blk_offs:0x0      blk_sz:0x20
    Region type:0x2        ver:0 blk_offs:0x20     blk_sz:0x5000
    Region type:0x3        ver:1 blk_offs:0x5020   blk_sz:0x80
    Region type:0x4        ver:1 blk_offs:0x50a0   blk_sz:0x80
    Region type:0xa        ver:1 blk_offs:0x5120   blk_sz:0x400
    Region type:0xb        ver:1 blk_offs:0x5520   blk_sz:0x400
    Region type:0xc        ver:1 blk_offs:0x5920   blk_sz:0x400
    Region type:0xd        ver:1 blk_offs:0x5d20   blk_sz:0x400
    Region type:0xe        ver:0 blk_offs:0x6120   blk_sz:0x40
    Region type:0xf        ver:0 blk_offs:0x6160   blk_sz:0x40
    Region type:0x6        ver:1 blk_offs:0x61a0   blk_sz:0x20
    Region type:0x7        ver:1 blk_offs:0x61c0   blk_sz:0x20
    Region type:0x8        ver:0 blk_offs:0x61e0   blk_sz:0x100000
    Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
[2024-05-15 12:48:03.891807] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
    Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
    Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-05-15 12:48:03.891879] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.893 ms, status: 0
[2024-05-15 12:48:03.915238] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 23.270 ms, status: 0
[2024-05-15 12:48:03.915921] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.085 ms, status: 0
[2024-05-15 12:48:03.967490] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 51.187 ms, status: 0
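Two numbers in the layout dump can be cross-checked directly: the l2p region must hold one 4-byte address per L2P entry, and on the base device data_btm plus its size should land exactly on the vmap offset. A quick check with all constants copied from the dump above:

```bash
# Consistency checks against the layout dump:
#   l2p region: 20971520 entries * 4 B/address = 80.00 MiB (as logged)
#   base dev:   data_btm offset 0.25 MiB + 102400.00 MiB = vmap offset
awk 'BEGIN {
  printf "l2p region size = %.2f MiB\n", 20971520 * 4 / (1024 * 1024)
  printf "vmap offset     = %.2f MiB\n", 0.25 + 102400.00
}'
```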
[2024-05-15 12:48:03.968206] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.005 ms, status: 0
[2024-05-15 12:48:03.969166] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.566 ms, status: 0
[2024-05-15 12:48:03.969801] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.140 ms, status: 0
[2024-05-15 12:48:03.990842] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 20.694 ms, status: 0
[2024-05-15 12:48:04.008125] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-05-15 12:48:04.008168] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-05-15 12:48:04.008202] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 16.831 ms, status: 0
[2024-05-15 12:48:04.035916] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 27.629 ms, status: 0
[2024-05-15 12:48:04.052087] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 15.932 ms, status: 0
[2024-05-15 12:48:04.066533] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 14.306 ms, status: 0
[2024-05-15 12:48:04.067208] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.423 ms, status: 0
[2024-05-15 12:48:04.142501] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 75.178 ms, status: 0
[2024-05-15 12:48:04.154218] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-05-15 12:48:04.158292] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 15.583 ms, status: 0
[2024-05-15 12:48:04.158510] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.008 ms, status: 0
[2024-05-15 12:48:04.159668] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 1.040 ms, status: 0
[2024-05-15 12:48:04.162053] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Free P2L region bufs, duration: 2.287 ms, status: 0
[2024-05-15 12:48:04.162183] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.007 ms, status: 0
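With this many trace_step records, ranking the duration fields is the fastest way to see where startup time goes; here Restore P2L checkpoints (75.178 ms) and Initialize NV cache (51.187 ms) dominate. A one-liner sketch against a saved copy of this console output (build.log is a placeholder name):

```bash
# Rank trace_step durations, largest first, from a saved log copy.
grep -o 'duration: [0-9.]* ms' build.log | sort -k2 -rn | head -n 5
```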
[2024-05-15 12:48:04.162282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-05-15 12:48:04.162299] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.019 ms, status: 0
[2024-05-15 12:48:04.195229] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 32.856 ms, status: 0
[2024-05-15 12:48:04.195862] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.045 ms, status: 0
[2024-05-15 12:48:04.197473] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.799 ms, result 0
Copying: 25/1024 [MB] (25 MBps), then steady progress at 21-26 MBps, Copying: 1024/1024 [MB] (average 25 MBps)
[2024-05-15 12:48:44.742598] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.006 ms, status: 0
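The copy progress above is internally consistent: 1024 MB at the reported 25 MBps average predicts roughly 41 s, matching the wall-clock gap between 'FTL startup' finishing at 12:48:04 and the shutdown sequence starting at 12:48:44:

```bash
# Expected transfer time for 1024 MB at the logged 25 MBps average.
awk 'BEGIN { printf "expected = %.1f s\n", 1024 / 25 }'
# prints: expected = 41.0 s
```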
[2024-05-15 12:48:44.743106] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-05-15 12:48:44.749956] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 6.814 ms, status: 0
[2024-05-15 12:48:44.750556] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.428 ms, status: 0
[2024-05-15 12:48:44.754827] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 4.132 ms, status: 0
[2024-05-15 12:48:44.761741] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P unmaps, duration: 6.753 ms, status: 0
[2024-05-15 12:48:44.794373] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 32.442 ms, status: 0
[2024-05-15 12:48:44.812343] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 17.813 ms, status: 0
[2024-05-15 12:48:44.816161] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 3.692 ms, status: 0
[2024-05-15 12:48:44.848453] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata, duration: 32.172 ms, status: 0
[2024-05-15 12:48:44.878744] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata, duration: 30.114 ms, status: 0
[2024-05-15 12:48:44.909837] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 30.926 ms, status: 0
[2024-05-15 12:48:45.334907] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 424.481 ms, status: 0
[2024-05-15 12:48:45.335412] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
    Band 1:      261120 / 261120  wr_cnt: 1  state: closed
    Band 2:        3584 / 261120  wr_cnt: 1  state: open
    Bands 3-100:      0 / 261120  wr_cnt: 0  state: free
[2024-05-15 12:48:45.336792] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
    device UUID:      93cea669-6017-42ed-bc97-87991307ce19
    total valid LBAs: 264704
    total writes:     960
    user writes:      0
    WAF:              inf
    limits:           crit: 0, high: 0, low: 0, start: 0
[2024-05-15 12:48:45.336933] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.524 ms, status: 0
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.622 [2024-05-15 12:48:45.353996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.622 [2024-05-15 12:48:45.354040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:36.622 [2024-05-15 12:48:45.354067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.942 ms 00:27:36.623 [2024-05-15 12:48:45.354086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.354373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.623 [2024-05-15 12:48:45.354394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:36.623 [2024-05-15 12:48:45.354416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:27:36.623 [2024-05-15 12:48:45.354428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.402564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.402634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:36.623 [2024-05-15 12:48:45.402658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.402671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.402776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.402792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:36.623 [2024-05-15 12:48:45.402813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.402825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.402947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.402975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:36.623 [2024-05-15 12:48:45.403000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.403021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.403064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.403084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:36.623 [2024-05-15 12:48:45.403097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.403117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.509229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.509297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:36.623 [2024-05-15 12:48:45.509331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.509343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.548960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.549012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:36.623 [2024-05-15 12:48:45.549046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.549066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.549164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.549183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:36.623 [2024-05-15 12:48:45.549196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.549207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.549261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.549277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:36.623 [2024-05-15 12:48:45.549289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.549310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.549433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.549468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:36.623 [2024-05-15 12:48:45.549481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.549493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.549581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.549601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:36.623 [2024-05-15 12:48:45.549615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.549627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.549680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.549696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:36.623 [2024-05-15 12:48:45.549709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.549721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.549777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.623 [2024-05-15 12:48:45.549793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:36.623 [2024-05-15 12:48:45.549806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.623 [2024-05-15 12:48:45.549818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.623 [2024-05-15 12:48:45.549966] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 807.360 ms, result 0 00:27:37.997 00:27:37.997 00:27:37.997 12:48:46 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:40.596 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@32 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76762 00:27:40.596 12:48:49 -- common/autotest_common.sh@926 -- # '[' -z 76762 ']' 00:27:40.596 12:48:49 -- common/autotest_common.sh@930 -- # kill -0 76762 00:27:40.596 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (76762) - No such process 00:27:40.596 Process with pid 76762 is not found 00:27:40.596 12:48:49 -- common/autotest_common.sh@953 -- # echo 'Process with pid 76762 is not found' 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:40.596 Remove shared memory files 00:27:40.596 12:48:49 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:40.596 12:48:49 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:40.596 12:48:49 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:40.596 12:48:49 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:40.596 12:48:49 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:40.596 12:48:49 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:40.596 12:48:49 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:40.596 ************************************ 00:27:40.596 END TEST ftl_dirty_shutdown 00:27:40.596 ************************************ 00:27:40.596 00:27:40.596 real 3m54.823s 00:27:40.596 user 4m29.496s 00:27:40.596 sys 0m38.529s 00:27:40.596 12:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.596 12:48:49 -- common/autotest_common.sh@10 -- # set +x 00:27:40.855 12:48:49 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:27:40.855 12:48:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:40.855 12:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.855 12:48:49 -- common/autotest_common.sh@10 -- # set +x 00:27:40.855 ************************************ 00:27:40.855 START TEST ftl_upgrade_shutdown 00:27:40.855 ************************************ 00:27:40.855 12:48:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:27:40.855 * Looking for test storage... 00:27:40.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:40.855 12:48:49 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:40.855 12:48:49 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.855 12:48:49 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.855 12:48:49 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
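
The dirname/readlink pair that ftl/common.sh traces here is how each test script locates itself and the repository root before anything else runs. A minimal standalone sketch of that idiom, using the same variable names the log shows (the echo at the end is illustrative only):

    #!/usr/bin/env bash
    # Resolve the directory holding this script, then walk two levels up to the
    # repo root -- the same dirname + readlink -f pattern traced above.
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../..")
    rpc_py=$rootdir/scripts/rpc.py   # JSON-RPC client invoked as $rpc_py below
    echo "testdir=$testdir rootdir=$rootdir rpc_py=$rpc_py"
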
00:27:40.855 12:48:49 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:40.855 12:48:49 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.855 12:48:49 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:40.855 12:48:49 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:40.855 12:48:49 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.855 12:48:49 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.855 12:48:49 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:40.855 12:48:49 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:40.855 12:48:49 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:40.855 12:48:49 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:40.855 12:48:49 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:40.855 12:48:49 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:40.855 12:48:49 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.855 12:48:49 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.855 12:48:49 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:40.855 12:48:49 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:40.855 12:48:49 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:40.855 12:48:49 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:40.855 12:48:49 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:40.855 12:48:49 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:40.855 12:48:49 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:40.855 12:48:49 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:40.855 12:48:49 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.855 12:48:49 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:40.855 12:48:49 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:40.855 12:48:49 -- ftl/common.sh@81 -- # local base_bdev= 00:27:40.855 12:48:49 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:40.855 12:48:49 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:40.855 12:48:49 -- ftl/common.sh@89 -- # spdk_tgt_pid=79231 00:27:40.855 12:48:49 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:40.855 12:48:49 -- ftl/common.sh@91 -- # waitforlisten 79231 00:27:40.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.855 12:48:49 -- common/autotest_common.sh@819 -- # '[' -z 79231 ']' 00:27:40.855 12:48:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.855 12:48:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:40.855 12:48:49 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:40.855 12:48:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.855 12:48:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:40.855 12:48:49 -- common/autotest_common.sh@10 -- # set +x 00:27:40.855 [2024-05-15 12:48:49.848292] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:40.855 [2024-05-15 12:48:49.848901] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79231 ] 00:27:41.114 [2024-05-15 12:48:50.018782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.371 [2024-05-15 12:48:50.292446] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:41.371 [2024-05-15 12:48:50.292739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.808 12:48:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:42.808 12:48:51 -- common/autotest_common.sh@852 -- # return 0 00:27:42.808 12:48:51 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:42.808 12:48:51 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:42.808 12:48:51 -- ftl/common.sh@99 -- # local params 00:27:42.808 12:48:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:42.808 12:48:51 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:42.808 12:48:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:42.808 12:48:51 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:27:42.808 12:48:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:42.808 12:48:51 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:42.808 12:48:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:42.808 12:48:51 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:27:42.808 12:48:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:42.808 12:48:51 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:42.808 12:48:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:42.808 12:48:51 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:42.808 12:48:51 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:27:42.808 12:48:51 -- ftl/common.sh@54 -- # local name=base 00:27:42.808 12:48:51 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:27:42.808 12:48:51 -- ftl/common.sh@56 -- # local size=20480 00:27:42.808 12:48:51 -- ftl/common.sh@59 -- # local base_bdev 00:27:42.808 12:48:51 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:27:43.065 12:48:51 -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:43.065 12:48:51 -- ftl/common.sh@62 -- # local base_size 00:27:43.065 12:48:51 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:43.065 12:48:51 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:27:43.065 12:48:51 -- common/autotest_common.sh@1358 -- # local bdev_info 00:27:43.065 12:48:51 -- common/autotest_common.sh@1359 -- # local bs 00:27:43.065 12:48:51 -- common/autotest_common.sh@1360 -- # local nb 00:27:43.065 12:48:51 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:43.323 12:48:52 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:27:43.323 { 00:27:43.323 "name": "basen1", 00:27:43.323 "aliases": [ 00:27:43.323 "506898d0-1306-4758-86b2-416b785d59d9" 00:27:43.323 ], 00:27:43.323 "product_name": "NVMe disk", 00:27:43.323 "block_size": 4096, 00:27:43.323 "num_blocks": 1310720, 00:27:43.323 "uuid": "506898d0-1306-4758-86b2-416b785d59d9", 00:27:43.323 "assigned_rate_limits": { 00:27:43.323 "rw_ios_per_sec": 0, 00:27:43.323 "rw_mbytes_per_sec": 0, 00:27:43.323 "r_mbytes_per_sec": 0, 00:27:43.323 "w_mbytes_per_sec": 0 00:27:43.323 }, 00:27:43.323 "claimed": true, 00:27:43.323 "claim_type": "read_many_write_one", 00:27:43.323 "zoned": false, 00:27:43.323 "supported_io_types": { 00:27:43.323 "read": true, 00:27:43.323 "write": true, 00:27:43.323 "unmap": true, 00:27:43.323 "write_zeroes": true, 00:27:43.323 "flush": true, 00:27:43.323 "reset": true, 00:27:43.323 "compare": true, 00:27:43.323 "compare_and_write": false, 00:27:43.323 "abort": true, 00:27:43.323 "nvme_admin": true, 00:27:43.323 "nvme_io": true 00:27:43.323 }, 00:27:43.323 "driver_specific": { 00:27:43.323 "nvme": [ 00:27:43.323 { 00:27:43.323 "pci_address": "0000:00:07.0", 00:27:43.323 "trid": { 00:27:43.323 "trtype": "PCIe", 00:27:43.323 "traddr": "0000:00:07.0" 00:27:43.323 }, 00:27:43.323 "ctrlr_data": { 00:27:43.323 "cntlid": 0, 00:27:43.323 "vendor_id": "0x1b36", 00:27:43.323 "model_number": "QEMU NVMe Ctrl", 00:27:43.323 "serial_number": "12341", 00:27:43.323 "firmware_revision": "8.0.0", 00:27:43.323 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:43.323 "oacs": { 00:27:43.323 "security": 0, 00:27:43.323 "format": 1, 00:27:43.323 "firmware": 0, 00:27:43.323 "ns_manage": 1 00:27:43.323 }, 00:27:43.323 "multi_ctrlr": false, 00:27:43.323 "ana_reporting": false 00:27:43.323 }, 00:27:43.323 "vs": { 00:27:43.323 "nvme_version": "1.4" 00:27:43.323 }, 00:27:43.323 "ns_data": { 00:27:43.323 "id": 1, 00:27:43.323 "can_share": false 00:27:43.323 } 00:27:43.323 } 00:27:43.323 ], 00:27:43.323 "mp_policy": "active_passive" 00:27:43.323 } 00:27:43.323 } 00:27:43.323 ]' 00:27:43.323 12:48:52 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:27:43.323 12:48:52 -- common/autotest_common.sh@1362 -- # bs=4096 00:27:43.323 12:48:52 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:27:43.323 12:48:52 -- common/autotest_common.sh@1363 -- # nb=1310720 00:27:43.323 12:48:52 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:27:43.323 12:48:52 -- common/autotest_common.sh@1367 -- # echo 5120 00:27:43.323 12:48:52 -- ftl/common.sh@63 -- # base_size=5120 00:27:43.323 12:48:52 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:43.323 12:48:52 -- ftl/common.sh@67 -- # clear_lvols 00:27:43.323 12:48:52 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:43.323 12:48:52 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:27:43.581 12:48:52 -- ftl/common.sh@28 -- # stores=1ea88be1-b9a7-4c6c-8698-2647820af5ce 00:27:43.581 12:48:52 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:43.581 12:48:52 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ea88be1-b9a7-4c6c-8698-2647820af5ce 00:27:43.838 12:48:52 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:44.096 12:48:52 -- ftl/common.sh@68 -- # lvs=342e5504-974f-4553-a377-1889b02a030b 00:27:44.096 12:48:52 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 342e5504-974f-4553-a377-1889b02a030b 00:27:44.354 12:48:53 -- ftl/common.sh@107 -- # base_bdev=76e240d7-950c-4117-811a-47ebf1e03812 00:27:44.354 12:48:53 -- ftl/common.sh@108 -- # [[ -z 76e240d7-950c-4117-811a-47ebf1e03812 ]] 00:27:44.354 12:48:53 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 76e240d7-950c-4117-811a-47ebf1e03812 5120 00:27:44.354 12:48:53 -- ftl/common.sh@35 -- # local name=cache 00:27:44.354 12:48:53 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:27:44.354 12:48:53 -- ftl/common.sh@37 -- # local base_bdev=76e240d7-950c-4117-811a-47ebf1e03812 00:27:44.354 12:48:53 -- ftl/common.sh@38 -- # local cache_size=5120 00:27:44.354 12:48:53 -- ftl/common.sh@41 -- # get_bdev_size 76e240d7-950c-4117-811a-47ebf1e03812 00:27:44.354 12:48:53 -- common/autotest_common.sh@1357 -- # local bdev_name=76e240d7-950c-4117-811a-47ebf1e03812 00:27:44.354 12:48:53 -- common/autotest_common.sh@1358 -- # local bdev_info 00:27:44.354 12:48:53 -- common/autotest_common.sh@1359 -- # local bs 00:27:44.354 12:48:53 -- common/autotest_common.sh@1360 -- # local nb 00:27:44.354 12:48:53 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76e240d7-950c-4117-811a-47ebf1e03812 00:27:44.612 12:48:53 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:27:44.613 { 00:27:44.613 "name": "76e240d7-950c-4117-811a-47ebf1e03812", 00:27:44.613 "aliases": [ 00:27:44.613 "lvs/basen1p0" 00:27:44.613 ], 00:27:44.613 "product_name": "Logical Volume", 00:27:44.613 "block_size": 4096, 00:27:44.613 "num_blocks": 5242880, 00:27:44.613 "uuid": "76e240d7-950c-4117-811a-47ebf1e03812", 00:27:44.613 "assigned_rate_limits": { 00:27:44.613 "rw_ios_per_sec": 0, 00:27:44.613 "rw_mbytes_per_sec": 0, 00:27:44.613 "r_mbytes_per_sec": 0, 00:27:44.613 "w_mbytes_per_sec": 0 00:27:44.613 }, 00:27:44.613 "claimed": false, 00:27:44.613 "zoned": false, 00:27:44.613 "supported_io_types": { 00:27:44.613 "read": true, 00:27:44.613 "write": true, 00:27:44.613 "unmap": true, 00:27:44.613 "write_zeroes": true, 00:27:44.613 "flush": false, 00:27:44.613 "reset": true, 00:27:44.613 "compare": false, 00:27:44.613 "compare_and_write": false, 00:27:44.613 "abort": false, 00:27:44.613 "nvme_admin": false, 00:27:44.613 "nvme_io": false 00:27:44.613 }, 00:27:44.613 "driver_specific": { 00:27:44.613 "lvol": { 00:27:44.613 "lvol_store_uuid": "342e5504-974f-4553-a377-1889b02a030b", 00:27:44.613 "base_bdev": "basen1", 00:27:44.613 "thin_provision": true, 00:27:44.613 "snapshot": false, 00:27:44.613 "clone": false, 00:27:44.613 "esnap_clone": false 00:27:44.613 } 00:27:44.613 } 00:27:44.613 } 00:27:44.613 ]' 00:27:44.613 12:48:53 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:27:44.613 12:48:53 -- common/autotest_common.sh@1362 -- # bs=4096 00:27:44.613 12:48:53 -- common/autotest_common.sh@1363 -- # jq '.[] 
.num_blocks' 00:27:44.613 12:48:53 -- common/autotest_common.sh@1363 -- # nb=5242880 00:27:44.613 12:48:53 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:27:44.613 12:48:53 -- common/autotest_common.sh@1367 -- # echo 20480 00:27:44.613 12:48:53 -- ftl/common.sh@41 -- # local base_size=1024 00:27:44.613 12:48:53 -- ftl/common.sh@44 -- # local nvc_bdev 00:27:44.613 12:48:53 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:27:44.873 12:48:53 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:44.873 12:48:53 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:44.873 12:48:53 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:45.161 12:48:54 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:45.161 12:48:54 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:45.161 12:48:54 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 76e240d7-950c-4117-811a-47ebf1e03812 -c cachen1p0 --l2p_dram_limit 2 00:27:45.421 [2024-05-15 12:48:54.364400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.364468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:45.421 [2024-05-15 12:48:54.364527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:45.421 [2024-05-15 12:48:54.364559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.364644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.364662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:45.421 [2024-05-15 12:48:54.364677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:45.421 [2024-05-15 12:48:54.364696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.364729] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:45.421 [2024-05-15 12:48:54.365781] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:45.421 [2024-05-15 12:48:54.365834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.365850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:45.421 [2024-05-15 12:48:54.365867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.109 ms 00:27:45.421 [2024-05-15 12:48:54.365879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.366020] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 55113400-f21e-4571-87ef-32d93949112a 00:27:45.421 [2024-05-15 12:48:54.367913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.367956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:45.421 [2024-05-15 12:48:54.367973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:45.421 [2024-05-15 12:48:54.367988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.378044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.378127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:27:45.421 [2024-05-15 12:48:54.378147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.986 ms 00:27:45.421 [2024-05-15 12:48:54.378161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.378222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.378242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:45.421 [2024-05-15 12:48:54.378255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:45.421 [2024-05-15 12:48:54.378270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.378363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.378384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:45.421 [2024-05-15 12:48:54.378397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:45.421 [2024-05-15 12:48:54.378411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.378459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:45.421 [2024-05-15 12:48:54.384015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.384074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:45.421 [2024-05-15 12:48:54.384110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.567 ms 00:27:45.421 [2024-05-15 12:48:54.384122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.384165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.384180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:45.421 [2024-05-15 12:48:54.384195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:45.421 [2024-05-15 12:48:54.384206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.384252] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:45.421 [2024-05-15 12:48:54.384394] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:45.421 [2024-05-15 12:48:54.384416] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:45.421 [2024-05-15 12:48:54.384447] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:45.421 [2024-05-15 12:48:54.384463] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:45.421 [2024-05-15 12:48:54.384476] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:45.421 [2024-05-15 12:48:54.384490] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:45.421 [2024-05-15 12:48:54.384501] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:45.421 [2024-05-15 12:48:54.384513] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:45.421 [2024-05-15 12:48:54.384566] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:45.421 [2024-05-15 
12:48:54.384600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.384612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:45.421 [2024-05-15 12:48:54.384626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.350 ms 00:27:45.421 [2024-05-15 12:48:54.384653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.384731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.421 [2024-05-15 12:48:54.384745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:45.421 [2024-05-15 12:48:54.384773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:45.421 [2024-05-15 12:48:54.384785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.421 [2024-05-15 12:48:54.384877] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:45.421 [2024-05-15 12:48:54.384895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:45.421 [2024-05-15 12:48:54.384909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:45.421 [2024-05-15 12:48:54.384921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.421 [2024-05-15 12:48:54.384935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:45.421 [2024-05-15 12:48:54.384946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:45.421 [2024-05-15 12:48:54.384976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:45.421 [2024-05-15 12:48:54.384987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:45.421 [2024-05-15 12:48:54.385000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:45.421 [2024-05-15 12:48:54.385010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.421 [2024-05-15 12:48:54.385022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:45.422 [2024-05-15 12:48:54.385033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:45.422 [2024-05-15 12:48:54.385047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.422 [2024-05-15 12:48:54.385058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:45.422 [2024-05-15 12:48:54.385072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:45.422 [2024-05-15 12:48:54.385082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.422 [2024-05-15 12:48:54.385097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:45.422 [2024-05-15 12:48:54.385108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:45.422 [2024-05-15 12:48:54.385120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.422 [2024-05-15 12:48:54.385131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:45.422 [2024-05-15 12:48:54.385144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:45.422 [2024-05-15 12:48:54.385155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:45.422 [2024-05-15 12:48:54.385179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:45.422 [2024-05-15 
12:48:54.385191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:45.422 [2024-05-15 12:48:54.385214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:45.422 [2024-05-15 12:48:54.385225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:45.422 [2024-05-15 12:48:54.385247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:45.422 [2024-05-15 12:48:54.385260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:45.422 [2024-05-15 12:48:54.385288] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:45.422 [2024-05-15 12:48:54.385299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:45.422 [2024-05-15 12:48:54.385322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:45.422 [2024-05-15 12:48:54.385335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.422 [2024-05-15 12:48:54.385346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:45.422 [2024-05-15 12:48:54.385360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.422 [2024-05-15 12:48:54.385384] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:45.422 [2024-05-15 12:48:54.385395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:45.422 [2024-05-15 12:48:54.385409] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:45.422 [2024-05-15 12:48:54.385435] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:45.422 [2024-05-15 12:48:54.385447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:45.422 [2024-05-15 12:48:54.385460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:45.422 [2024-05-15 12:48:54.385471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:45.422 [2024-05-15 12:48:54.385486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:45.422 [2024-05-15 12:48:54.385520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:45.422 [2024-05-15 12:48:54.385538] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:45.422 [2024-05-15 12:48:54.385553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:45.422 [2024-05-15 12:48:54.385580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:45.422 [2024-05-15 
12:48:54.385594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385605] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:45.422 [2024-05-15 12:48:54.385619] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:45.422 [2024-05-15 12:48:54.385630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:45.422 [2024-05-15 12:48:54.385644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:45.422 [2024-05-15 12:48:54.385655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385668] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:45.422 [2024-05-15 12:48:54.385726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:45.422 [2024-05-15 12:48:54.385737] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:45.422 [2024-05-15 12:48:54.385758] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:45.422 [2024-05-15 12:48:54.385785] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:45.422 [2024-05-15 12:48:54.385806] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:45.422 [2024-05-15 12:48:54.385820] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:45.422 [2024-05-15 12:48:54.385833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.422 [2024-05-15 12:48:54.385848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:45.422 [2024-05-15 12:48:54.385860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.006 ms 00:27:45.422 [2024-05-15 12:48:54.385873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.422 [2024-05-15 12:48:54.407852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.422 [2024-05-15 12:48:54.407944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:27:45.422 [2024-05-15 12:48:54.407964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.918 ms 00:27:45.422 [2024-05-15 12:48:54.407978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.422 [2024-05-15 12:48:54.408040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.422 [2024-05-15 12:48:54.408060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:45.422 [2024-05-15 12:48:54.408072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:45.422 [2024-05-15 12:48:54.408085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 12:48:54.453558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.681 [2024-05-15 12:48:54.453644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:45.681 [2024-05-15 12:48:54.453666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.381 ms 00:27:45.681 [2024-05-15 12:48:54.453681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 12:48:54.453747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.681 [2024-05-15 12:48:54.453770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:45.681 [2024-05-15 12:48:54.453783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:45.681 [2024-05-15 12:48:54.453797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 12:48:54.454445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.681 [2024-05-15 12:48:54.454474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:45.681 [2024-05-15 12:48:54.454502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.566 ms 00:27:45.681 [2024-05-15 12:48:54.454519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 12:48:54.454588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.681 [2024-05-15 12:48:54.454620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:45.681 [2024-05-15 12:48:54.454632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:45.681 [2024-05-15 12:48:54.454662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 12:48:54.476760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.681 [2024-05-15 12:48:54.476831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:45.681 [2024-05-15 12:48:54.476869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.068 ms 00:27:45.681 [2024-05-15 12:48:54.476884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 12:48:54.494403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:45.681 [2024-05-15 12:48:54.495924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.681 [2024-05-15 12:48:54.495959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:45.681 [2024-05-15 12:48:54.496006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.858 ms 00:27:45.681 [2024-05-15 12:48:54.496019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 
12:48:54.525806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.681 [2024-05-15 12:48:54.525887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:45.681 [2024-05-15 12:48:54.525915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 29.721 ms 00:27:45.681 [2024-05-15 12:48:54.525927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.681 [2024-05-15 12:48:54.526006] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:27:45.681 [2024-05-15 12:48:54.526026] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:27:48.962 [2024-05-15 12:48:57.734067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.734170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:48.962 [2024-05-15 12:48:57.734210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3208.066 ms 00:27:48.962 [2024-05-15 12:48:57.734223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.734350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.734369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:48.962 [2024-05-15 12:48:57.734385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:27:48.962 [2024-05-15 12:48:57.734396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.763248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.763291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:48.962 [2024-05-15 12:48:57.763328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.788 ms 00:27:48.962 [2024-05-15 12:48:57.763341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.792115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.792165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:48.962 [2024-05-15 12:48:57.792205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.695 ms 00:27:48.962 [2024-05-15 12:48:57.792217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.792711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.792737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:48.962 [2024-05-15 12:48:57.792755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.444 ms 00:27:48.962 [2024-05-15 12:48:57.792767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.870520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.870644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:48.962 [2024-05-15 12:48:57.870672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 77.681 ms 00:27:48.962 [2024-05-15 12:48:57.870686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.903921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
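
Condensing the rpc.py calls traced between ftl/common.sh@107 and the bdev_ftl_create at @119 above: the FTL instance is built from a 20 GiB thin-provisioned lvol carved out of the base NVMe device, plus a 5 GiB split of the cache device as write buffer. A sketch of that provisioning sequence with the same addresses and sizes (the lvstore and lvol UUIDs are whatever the create calls return):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: attach, clear stale lvstores, create lvs + thin 20 GiB lvol.
    $rpc_py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0
    for lvs in $($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
    done
    lvs=$($rpc_py bdev_lvol_create_lvstore basen1 lvs)
    base_bdev=$($rpc_py bdev_lvol_create basen1p0 20480 -t -u "$lvs")
    # Cache device: attach and split off a single 5 GiB partition (cachen1p0).
    $rpc_py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
    $rpc_py bdev_split_create cachen1 -s 5120 1
    # Bind base + cache into the FTL bdev; first startup scrubs the NV cache,
    # which is the multi-second "Scrub NV cache" action in the trace above.
    $rpc_py -t 60 bdev_ftl_create -b ftl -d "$base_bdev" -c cachen1p0 --l2p_dram_limit 2
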
00:27:48.962 [2024-05-15 12:48:57.904026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:48.962 [2024-05-15 12:48:57.904067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 33.168 ms 00:27:48.962 [2024-05-15 12:48:57.904084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.906373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.906411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:48.962 [2024-05-15 12:48:57.906450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.194 ms 00:27:48.962 [2024-05-15 12:48:57.906462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.938108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.938154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:48.962 [2024-05-15 12:48:57.938191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.526 ms 00:27:48.962 [2024-05-15 12:48:57.938202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.938257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.938274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:48.962 [2024-05-15 12:48:57.938289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:48.962 [2024-05-15 12:48:57.938300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.938420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.962 [2024-05-15 12:48:57.938438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:48.962 [2024-05-15 12:48:57.938454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:48.962 [2024-05-15 12:48:57.938467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.962 [2024-05-15 12:48:57.939843] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3574.879 ms, result 0 00:27:48.962 { 00:27:48.962 "name": "ftl", 00:27:48.962 "uuid": "55113400-f21e-4571-87ef-32d93949112a" 00:27:48.962 } 00:27:48.962 12:48:57 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:49.220 [2024-05-15 12:48:58.222848] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.477 12:48:58 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:49.736 12:48:58 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:49.994 [2024-05-15 12:48:58.811600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:49.994 12:48:58 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:50.251 [2024-05-15 12:48:59.058013] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:50.251 12:48:59 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:50.509 Fill 
FTL, iteration 1 00:27:50.509 12:48:59 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:50.509 12:48:59 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:50.509 12:48:59 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:50.509 12:48:59 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:50.510 12:48:59 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:50.510 12:48:59 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:50.510 12:48:59 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:50.510 12:48:59 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:50.510 12:48:59 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:50.510 12:48:59 -- ftl/common.sh@163 -- # spdk_ini_pid=79368 00:27:50.510 12:48:59 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:50.510 12:48:59 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:50.510 12:48:59 -- ftl/common.sh@165 -- # waitforlisten 79368 /var/tmp/spdk.tgt.sock 00:27:50.510 12:48:59 -- common/autotest_common.sh@819 -- # '[' -z 79368 ']' 00:27:50.510 12:48:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:50.510 12:48:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:50.510 12:48:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:50.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:50.510 12:48:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:50.510 12:48:59 -- common/autotest_common.sh@10 -- # set +x 00:27:50.767 [2024-05-15 12:48:59.601567] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
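
The ftl/common.sh@121-126 sequence above publishes the freshly created FTL bdev over NVMe/TCP on loopback, after which a second spdk_tgt pinned to core 1 (pid 79368 here) plays the initiator. A sketch of the target-side export using the NQN and port from the log; the redirect of save_config into tgt.json is inferred from the spdk_tgt_cnfg export earlier, not shown verbatim in the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2018-09.io.spdk:cnode0
    $rpc_py nvmf_create_transport --trtype TCP            # TCP transport init
    $rpc_py nvmf_create_subsystem "$nqn" -a -m 1          # allow any host, max 1 ns
    $rpc_py nvmf_subsystem_add_ns "$nqn" ftl              # namespace = FTL bdev
    $rpc_py nvmf_subsystem_add_listener "$nqn" -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc_py save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
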
00:27:50.767 [2024-05-15 12:48:59.602016] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79368 ] 00:27:50.767 [2024-05-15 12:48:59.769593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.025 [2024-05-15 12:49:00.023506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:51.025 [2024-05-15 12:49:00.023736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.439 12:49:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:52.439 12:49:01 -- common/autotest_common.sh@852 -- # return 0 00:27:52.439 12:49:01 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:52.697 ftln1 00:27:52.697 12:49:01 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:52.697 12:49:01 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:52.955 12:49:01 -- ftl/common.sh@173 -- # echo ']}' 00:27:52.955 12:49:01 -- ftl/common.sh@176 -- # killprocess 79368 00:27:52.955 12:49:01 -- common/autotest_common.sh@926 -- # '[' -z 79368 ']' 00:27:52.955 12:49:01 -- common/autotest_common.sh@930 -- # kill -0 79368 00:27:52.955 12:49:01 -- common/autotest_common.sh@931 -- # uname 00:27:52.955 12:49:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:52.955 12:49:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79368 00:27:52.955 killing process with pid 79368 00:27:52.955 12:49:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:52.955 12:49:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:52.955 12:49:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79368' 00:27:52.955 12:49:01 -- common/autotest_common.sh@945 -- # kill 79368 00:27:52.955 12:49:01 -- common/autotest_common.sh@950 -- # wait 79368 00:27:55.493 12:49:04 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:55.493 12:49:04 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:55.493 [2024-05-15 12:49:04.116316] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
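What the one-time tcp_initiator_setup just did: a throwaway spdk_tgt pinned to core 1 attached to the FTL subsystem exported on 127.0.0.1:4420 (the namespace surfaces as bdev ftln1), its bdev subsystem state was dumped and wrapped into a complete JSON config, and the helper process was killed; every later spdk_dd run simply replays that config. Roughly, per the common.sh@167-176 trace (the redirect into ini.json is implied by the helper, not visible in the trace):

```bash
rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"

# Attach over NVMe/TCP to the exported FTL subsystem; the namespace appears as ftln1.
$rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Dump just the bdev subsystem and wrap it into a standalone config for spdk_dd.
{
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev
    echo ']}'
} > "$ini_json"

killprocess "$spdk_ini_pid"    # the throwaway initiator target is no longer needed
```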
00:27:55.493 [2024-05-15 12:49:04.116474] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79433 ] 00:27:55.493 [2024-05-15 12:49:04.295211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.750 [2024-05-15 12:49:04.559826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.333  Copying: 206/1024 [MB] (206 MBps) Copying: 409/1024 [MB] (203 MBps) Copying: 614/1024 [MB] (205 MBps) Copying: 820/1024 [MB] (206 MBps) Copying: 1024/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:28:02.333 00:28:02.333 Calculate MD5 checksum, iteration 1 00:28:02.333 12:49:11 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:02.333 12:49:11 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:02.333 12:49:11 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:02.334 12:49:11 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:02.334 12:49:11 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:02.334 12:49:11 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:02.334 12:49:11 -- ftl/common.sh@154 -- # return 0 00:28:02.334 12:49:11 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:02.334 [2024-05-15 12:49:11.329782] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:02.334 [2024-05-15 12:49:11.329931] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79508 ] 00:28:02.592 [2024-05-15 12:49:11.493947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.851 [2024-05-15 12:49:11.736193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.534  Copying: 505/1024 [MB] (505 MBps) Copying: 1001/1024 [MB] (496 MBps) Copying: 1024/1024 [MB] (average 500 MBps) 00:28:06.534 00:28:06.534 12:49:15 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:06.534 12:49:15 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:08.436 12:49:17 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:08.436 Fill FTL, iteration 2 00:28:08.436 12:49:17 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=414a2faa208ee2eb0e81b2ba8074a015 00:28:08.436 12:49:17 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:08.436 12:49:17 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:08.436 12:49:17 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:08.436 12:49:17 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:08.436 12:49:17 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:08.436 12:49:17 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:08.436 12:49:17 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:08.436 12:49:17 -- ftl/common.sh@154 -- # return 0 00:28:08.436 12:49:17 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:08.695 [2024-05-15 12:49:17.542481] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
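Iteration 1 is now bookended: the readback hashed to 414a2faa208ee2eb0e81b2ba8074a015 (stored in sums[0]), seek/skip advanced by 1024 blocks, and the second 1 GiB fill is under way. Piecing the upgrade_shutdown.sh@38-48 trace lines together, the fill-and-checksum loop is approximately the following (a reconstruction; the script's echo lines use literal iteration numbers):

```bash
# Reconstructed from the upgrade_shutdown.sh@38-48 trace; values per @28-34.
# $spdk and tcp_dd as in the sketches above.
bs=1048576 count=1024 qd=2 iterations=2
file=$spdk/test/ftl/file
seek=0 skip=0 sums=()

for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    (( seek += count ))

    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
    (( skip += count ))

    sums[i]=$(md5sum "$file" | cut -f1 -d ' ')    # one digest kept per iteration
done
```

Note the throughput asymmetry in the Copying traces: fills run at roughly 205 MBps through the FTL write path at qd=2, while readbacks sustain roughly 500 MBps.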
00:28:08.695 [2024-05-15 12:49:17.542674] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79571 ] 00:28:09.020 [2024-05-15 12:49:17.710854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.020 [2024-05-15 12:49:17.954302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.560  Copying: 205/1024 [MB] (205 MBps) Copying: 413/1024 [MB] (208 MBps) Copying: 616/1024 [MB] (203 MBps) Copying: 828/1024 [MB] (212 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:28:15.560 00:28:15.560 Calculate MD5 checksum, iteration 2 00:28:15.560 12:49:24 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:15.560 12:49:24 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:15.560 12:49:24 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:15.560 12:49:24 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:15.560 12:49:24 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:15.560 12:49:24 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:15.560 12:49:24 -- ftl/common.sh@154 -- # return 0 00:28:15.560 12:49:24 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:15.817 [2024-05-15 12:49:24.621962] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:15.817 [2024-05-15 12:49:24.622131] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79646 ] 00:28:15.817 [2024-05-15 12:49:24.798646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.418 [2024-05-15 12:49:25.108367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.007  Copying: 539/1024 [MB] (539 MBps) Copying: 1024/1024 [MB] (average 537 MBps) 00:28:22.007 00:28:22.007 12:49:30 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:22.007 12:49:30 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:24.534 12:49:33 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:24.535 12:49:33 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=74d6999a11e432bbb3e576b30ad42d79 00:28:24.535 12:49:33 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:24.535 12:49:33 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:24.535 12:49:33 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:24.535 [2024-05-15 12:49:33.393159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.535 [2024-05-15 12:49:33.393230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:24.535 [2024-05-15 12:49:33.393267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:24.535 [2024-05-15 12:49:33.393278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.535 [2024-05-15 12:49:33.393316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.535 [2024-05-15 12:49:33.393331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:24.535 [2024-05-15 12:49:33.393343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:24.535 [2024-05-15 12:49:33.393354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.535 [2024-05-15 12:49:33.393380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.535 [2024-05-15 12:49:33.393393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:24.535 [2024-05-15 12:49:33.393410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:24.535 [2024-05-15 12:49:33.393420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.535 [2024-05-15 12:49:33.393504] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.342 ms, result 0 00:28:24.535 true 00:28:24.535 12:49:33 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:24.807 { 00:28:24.807 "name": "ftl", 00:28:24.807 "properties": [ 00:28:24.807 { 00:28:24.807 "name": "superblock_version", 00:28:24.807 "value": 5, 00:28:24.807 "read-only": true 00:28:24.807 }, 00:28:24.807 { 00:28:24.807 "name": "base_device", 00:28:24.808 "bands": [ 00:28:24.808 { 00:28:24.808 "id": 0, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 1, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 2, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 3, 
00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 4, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 5, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 6, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 7, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 8, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 9, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 10, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 11, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 12, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 13, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 14, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 15, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 16, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 17, 00:28:24.808 "state": "FREE", 00:28:24.808 "validity": 0.0 00:28:24.808 } 00:28:24.808 ], 00:28:24.808 "read-only": true 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "name": "cache_device", 00:28:24.808 "type": "bdev", 00:28:24.808 "chunks": [ 00:28:24.808 { 00:28:24.808 "id": 0, 00:28:24.808 "state": "CLOSED", 00:28:24.808 "utilization": 1.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 1, 00:28:24.808 "state": "CLOSED", 00:28:24.808 "utilization": 1.0 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 2, 00:28:24.808 "state": "OPEN", 00:28:24.808 "utilization": 0.001953125 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "id": 3, 00:28:24.808 "state": "OPEN", 00:28:24.808 "utilization": 0.0 00:28:24.808 } 00:28:24.808 ], 00:28:24.808 "read-only": true 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "name": "verbose_mode", 00:28:24.808 "value": true, 00:28:24.808 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:24.808 }, 00:28:24.808 { 00:28:24.808 "name": "prep_upgrade_on_shutdown", 00:28:24.808 "value": false, 00:28:24.808 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:24.808 } 00:28:24.808 ] 00:28:24.808 } 00:28:24.808 12:49:33 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:25.066 [2024-05-15 12:49:33.903943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.066 [2024-05-15 12:49:33.904012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:25.066 [2024-05-15 12:49:33.904050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:25.066 [2024-05-15 12:49:33.904063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.066 [2024-05-15 12:49:33.904100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.066 [2024-05-15 
12:49:33.904115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:25.066 [2024-05-15 12:49:33.904128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:25.066 [2024-05-15 12:49:33.904139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.066 [2024-05-15 12:49:33.904167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.066 [2024-05-15 12:49:33.904180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:25.066 [2024-05-15 12:49:33.904192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:25.066 [2024-05-15 12:49:33.904203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.066 [2024-05-15 12:49:33.904281] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.325 ms, result 0 00:28:25.066 true 00:28:25.066 12:49:33 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:25.066 12:49:33 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:25.066 12:49:33 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:25.323 12:49:34 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:25.323 12:49:34 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:25.323 12:49:34 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:25.581 [2024-05-15 12:49:34.408502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.581 [2024-05-15 12:49:34.408591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:25.581 [2024-05-15 12:49:34.408613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:25.581 [2024-05-15 12:49:34.408624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.581 [2024-05-15 12:49:34.408662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.581 [2024-05-15 12:49:34.408678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:25.581 [2024-05-15 12:49:34.408690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:25.581 [2024-05-15 12:49:34.408701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.581 [2024-05-15 12:49:34.408730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.581 [2024-05-15 12:49:34.408743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:25.581 [2024-05-15 12:49:34.408755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:25.581 [2024-05-15 12:49:34.408766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.581 [2024-05-15 12:49:34.408845] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.327 ms, result 0 00:28:25.581 true 00:28:25.581 12:49:34 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:25.840 { 00:28:25.840 "name": "ftl", 00:28:25.840 "properties": [ 00:28:25.840 { 00:28:25.840 "name": "superblock_version", 00:28:25.840 "value": 5, 00:28:25.840 "read-only": true 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "name": "base_device", 00:28:25.840 
"bands": [ 00:28:25.840 { 00:28:25.840 "id": 0, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 1, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 2, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 3, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 4, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 5, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 6, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 7, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 8, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 9, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 10, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 11, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 12, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 13, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 14, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 15, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 16, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 17, 00:28:25.840 "state": "FREE", 00:28:25.840 "validity": 0.0 00:28:25.840 } 00:28:25.840 ], 00:28:25.840 "read-only": true 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "name": "cache_device", 00:28:25.840 "type": "bdev", 00:28:25.840 "chunks": [ 00:28:25.840 { 00:28:25.840 "id": 0, 00:28:25.840 "state": "CLOSED", 00:28:25.840 "utilization": 1.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 1, 00:28:25.840 "state": "CLOSED", 00:28:25.840 "utilization": 1.0 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 2, 00:28:25.840 "state": "OPEN", 00:28:25.840 "utilization": 0.001953125 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "id": 3, 00:28:25.840 "state": "OPEN", 00:28:25.840 "utilization": 0.0 00:28:25.840 } 00:28:25.840 ], 00:28:25.840 "read-only": true 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "name": "verbose_mode", 00:28:25.840 "value": true, 00:28:25.840 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:25.840 }, 00:28:25.840 { 00:28:25.840 "name": "prep_upgrade_on_shutdown", 00:28:25.840 "value": true, 00:28:25.840 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:25.840 } 00:28:25.840 ] 00:28:25.840 } 00:28:25.840 12:49:34 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:25.840 12:49:34 -- ftl/common.sh@130 -- # [[ -n 79231 ]] 00:28:25.840 12:49:34 -- ftl/common.sh@131 -- # killprocess 79231 00:28:25.840 12:49:34 -- common/autotest_common.sh@926 -- # '[' -z 79231 ']' 00:28:25.840 12:49:34 -- common/autotest_common.sh@930 -- # kill -0 79231 
00:28:25.840 12:49:34 -- common/autotest_common.sh@931 -- # uname 00:28:25.840 12:49:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:25.840 12:49:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79231 00:28:25.840 killing process with pid 79231 00:28:25.840 12:49:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:25.840 12:49:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:25.840 12:49:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79231' 00:28:25.840 12:49:34 -- common/autotest_common.sh@945 -- # kill 79231 00:28:25.840 12:49:34 -- common/autotest_common.sh@950 -- # wait 79231 00:28:26.772 [2024-05-15 12:49:35.631861] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:28:26.772 [2024-05-15 12:49:35.650081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.772 [2024-05-15 12:49:35.650142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:26.772 [2024-05-15 12:49:35.650178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:26.772 [2024-05-15 12:49:35.650191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.772 [2024-05-15 12:49:35.650223] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:26.772 [2024-05-15 12:49:35.653897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.772 [2024-05-15 12:49:35.653937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:26.772 [2024-05-15 12:49:35.653953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.651 ms 00:28:26.772 [2024-05-15 12:49:35.653972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.134905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.135000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:36.739 [2024-05-15 12:49:44.135047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8480.926 ms 00:28:36.739 [2024-05-15 12:49:44.135062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.136808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.136854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:36.739 [2024-05-15 12:49:44.136873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.715 ms 00:28:36.739 [2024-05-15 12:49:44.136898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.138557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.138603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:28:36.739 [2024-05-15 12:49:44.138652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.535 ms 00:28:36.739 [2024-05-15 12:49:44.138666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.151807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.151852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:36.739 [2024-05-15 12:49:44.151883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 13.074 ms 00:28:36.739 [2024-05-15 12:49:44.151894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.159999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.160044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:36.739 [2024-05-15 12:49:44.160075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.065 ms 00:28:36.739 [2024-05-15 12:49:44.160087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.160200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.160227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:36.739 [2024-05-15 12:49:44.160239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:28:36.739 [2024-05-15 12:49:44.160250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.172523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.172720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:36.739 [2024-05-15 12:49:44.172755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.252 ms 00:28:36.739 [2024-05-15 12:49:44.172768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.185315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.185361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:36.739 [2024-05-15 12:49:44.185378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.494 ms 00:28:36.739 [2024-05-15 12:49:44.185390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.197859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.197947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:36.739 [2024-05-15 12:49:44.197977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.427 ms 00:28:36.739 [2024-05-15 12:49:44.197988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.210482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.210577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:36.739 [2024-05-15 12:49:44.210612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.391 ms 00:28:36.739 [2024-05-15 12:49:44.210624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.210676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:36.739 [2024-05-15 12:49:44.210719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:36.739 [2024-05-15 12:49:44.210735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:36.739 [2024-05-15 12:49:44.210748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:36.739 [2024-05-15 12:49:44.210760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 
12:49:44.210786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:36.739 [2024-05-15 12:49:44.210960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:36.739 [2024-05-15 12:49:44.210971] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 55113400-f21e-4571-87ef-32d93949112a 00:28:36.739 [2024-05-15 12:49:44.211018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:36.739 [2024-05-15 12:49:44.211029] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:28:36.739 [2024-05-15 12:49:44.211055] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:36.739 [2024-05-15 12:49:44.211082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:36.739 [2024-05-15 12:49:44.211107] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:36.739 [2024-05-15 12:49:44.211119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:36.739 [2024-05-15 12:49:44.211129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:36.739 [2024-05-15 12:49:44.211140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:36.739 [2024-05-15 12:49:44.211149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:36.739 [2024-05-15 12:49:44.211161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.211179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:36.739 [2024-05-15 12:49:44.211202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 
0.485 ms 00:28:36.739 [2024-05-15 12:49:44.211214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.226957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.226994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:36.739 [2024-05-15 12:49:44.227032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.695 ms 00:28:36.739 [2024-05-15 12:49:44.227042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.227290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.739 [2024-05-15 12:49:44.227306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:36.739 [2024-05-15 12:49:44.227318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:28:36.739 [2024-05-15 12:49:44.227328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.287416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.739 [2024-05-15 12:49:44.287516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:36.739 [2024-05-15 12:49:44.287539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.739 [2024-05-15 12:49:44.287551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.739 [2024-05-15 12:49:44.287617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.287632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:36.740 [2024-05-15 12:49:44.287645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.287656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.287779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.287798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:36.740 [2024-05-15 12:49:44.287818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.287830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.287855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.287868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:36.740 [2024-05-15 12:49:44.287880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.287891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.399268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.399348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:36.740 [2024-05-15 12:49:44.399383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.399395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.441601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.441675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:36.740 [2024-05-15 12:49:44.441707] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.441720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.441827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.441845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:36.740 [2024-05-15 12:49:44.441872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.441912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.441969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.441985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:36.740 [2024-05-15 12:49:44.441996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.442007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.442127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.442145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:36.740 [2024-05-15 12:49:44.442157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.442168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.442221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.442237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:36.740 [2024-05-15 12:49:44.442249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.442260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.442307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.442321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:36.740 [2024-05-15 12:49:44.442333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.442343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.442406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:36.740 [2024-05-15 12:49:44.442422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:36.740 [2024-05-15 12:49:44.442434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:36.740 [2024-05-15 12:49:44.442445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.740 [2024-05-15 12:49:44.442666] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8792.578 ms, result 0 00:28:39.362 12:49:48 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:39.362 12:49:48 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:39.362 12:49:48 -- ftl/common.sh@81 -- # local base_bdev= 00:28:39.362 12:49:48 -- ftl/common.sh@82 -- # local cache_bdev= 00:28:39.362 12:49:48 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:39.362 12:49:48 -- ftl/common.sh@89 -- # spdk_tgt_pid=79873 00:28:39.362 12:49:48 -- ftl/common.sh@90 -- # export spdk_tgt_pid 
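This is where the prep flag pays off: the 'FTL shutdown' management sequence above spends about 8.8 s quiescing the core poller, persisting L2P, NV cache, valid-map, P2L, band and trim metadata, writing the superblock, and setting the clean state, then rolls the runtime state back. The closing statistics are self-consistent: 786752 total writes against 524288 user writes gives the reported WAF of 1.5006, and the band dump accounts for the 2 GiB of user data as two full 261120-block bands plus 2048 blocks in band 3. tcp_target_shutdown and tcp_target_setup then bounce the main target, restarting it from the saved tgt.json instead of rebuilding the stack over RPC; condensed from the common.sh@130-132 and @81-91 traces:

```bash
# Shutdown/restart round trip, condensed from the ftl/common.sh traces above.
killprocess "$spdk_tgt_pid"    # triggers the ~8.8 s 'FTL shutdown' sequence
unset spdk_tgt_pid

"$spdk/build/bin/spdk_tgt" '--cpumask=[0]' \
    --config="$spdk/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"    # default RPC socket /var/tmp/spdk.sock
```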
00:28:39.362 12:49:48 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.362 12:49:48 -- ftl/common.sh@91 -- # waitforlisten 79873 00:28:39.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.362 12:49:48 -- common/autotest_common.sh@819 -- # '[' -z 79873 ']' 00:28:39.362 12:49:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.362 12:49:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:39.362 12:49:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.362 12:49:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:39.362 12:49:48 -- common/autotest_common.sh@10 -- # set +x 00:28:39.621 [2024-05-15 12:49:48.431176] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:39.621 [2024-05-15 12:49:48.431345] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79873 ] 00:28:39.621 [2024-05-15 12:49:48.603227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.878 [2024-05-15 12:49:48.841660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:39.878 [2024-05-15 12:49:48.841914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.853 [2024-05-15 12:49:49.720505] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:40.853 [2024-05-15 12:49:49.720618] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:40.853 [2024-05-15 12:49:49.863111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.853 [2024-05-15 12:49:49.863175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:40.853 [2024-05-15 12:49:49.863198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:40.853 [2024-05-15 12:49:49.863212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.853 [2024-05-15 12:49:49.863296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.853 [2024-05-15 12:49:49.863325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:40.853 [2024-05-15 12:49:49.863339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:40.853 [2024-05-15 12:49:49.863351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.853 [2024-05-15 12:49:49.863402] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:41.112 [2024-05-15 12:49:49.864357] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:41.112 [2024-05-15 12:49:49.864398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.864419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:41.112 [2024-05-15 12:49:49.864433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.014 ms 00:28:41.112 [2024-05-15 12:49:49.864445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 
12:49:49.866433] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:41.112 [2024-05-15 12:49:49.882687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.882734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:41.112 [2024-05-15 12:49:49.882769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.256 ms 00:28:41.112 [2024-05-15 12:49:49.882782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.882861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.882880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:41.112 [2024-05-15 12:49:49.882914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:41.112 [2024-05-15 12:49:49.882925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.891900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.891959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:41.112 [2024-05-15 12:49:49.891991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.873 ms 00:28:41.112 [2024-05-15 12:49:49.892003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.892066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.892155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:41.112 [2024-05-15 12:49:49.892167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:41.112 [2024-05-15 12:49:49.892178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.892246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.892263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:41.112 [2024-05-15 12:49:49.892275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:41.112 [2024-05-15 12:49:49.892286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.892330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:41.112 [2024-05-15 12:49:49.897441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.897479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:41.112 [2024-05-15 12:49:49.897548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.125 ms 00:28:41.112 [2024-05-15 12:49:49.897564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.897610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.897630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:41.112 [2024-05-15 12:49:49.897643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:41.112 [2024-05-15 12:49:49.897655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.897727] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:41.112 [2024-05-15 12:49:49.897762] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:28:41.112 [2024-05-15 12:49:49.897803] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:41.112 [2024-05-15 12:49:49.897831] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:28:41.112 [2024-05-15 12:49:49.897920] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:28:41.112 [2024-05-15 12:49:49.897936] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:41.112 [2024-05-15 12:49:49.897952] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:28:41.112 [2024-05-15 12:49:49.897968] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:41.112 [2024-05-15 12:49:49.897982] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:41.112 [2024-05-15 12:49:49.897995] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:41.112 [2024-05-15 12:49:49.898006] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:41.112 [2024-05-15 12:49:49.898018] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:28:41.112 [2024-05-15 12:49:49.898044] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:28:41.112 [2024-05-15 12:49:49.898075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.898087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:41.112 [2024-05-15 12:49:49.898104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.351 ms 00:28:41.112 [2024-05-15 12:49:49.898116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.898201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.112 [2024-05-15 12:49:49.898217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:41.112 [2024-05-15 12:49:49.898230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:41.112 [2024-05-15 12:49:49.898242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.112 [2024-05-15 12:49:49.898335] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:41.112 [2024-05-15 12:49:49.898360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:41.112 [2024-05-15 12:49:49.898374] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:41.112 [2024-05-15 12:49:49.898392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.112 [2024-05-15 12:49:49.898404] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:41.112 [2024-05-15 12:49:49.898415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:41.112 [2024-05-15 12:49:49.898427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:41.112 [2024-05-15 12:49:49.898437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:41.112 [2024-05-15 12:49:49.898449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:41.112 [2024-05-15 
12:49:49.898460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.112 [2024-05-15 12:49:49.898471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:41.113 [2024-05-15 12:49:49.898481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:41.113 [2024-05-15 12:49:49.898505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.113 [2024-05-15 12:49:49.898520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:41.113 [2024-05-15 12:49:49.898533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:28:41.113 [2024-05-15 12:49:49.898544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.113 [2024-05-15 12:49:49.898555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:41.113 [2024-05-15 12:49:49.898566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:28:41.113 [2024-05-15 12:49:49.898578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.113 [2024-05-15 12:49:49.898589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:28:41.113 [2024-05-15 12:49:49.898599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:28:41.113 [2024-05-15 12:49:49.898626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:41.113 [2024-05-15 12:49:49.898648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:41.113 [2024-05-15 12:49:49.898659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:41.113 [2024-05-15 12:49:49.898680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:28:41.113 [2024-05-15 12:49:49.898691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:41.113 [2024-05-15 12:49:49.898712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:41.113 [2024-05-15 12:49:49.898723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:41.113 [2024-05-15 12:49:49.898744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:28:41.113 [2024-05-15 12:49:49.898755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:41.113 [2024-05-15 12:49:49.898776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:41.113 [2024-05-15 12:49:49.898787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.113 [2024-05-15 12:49:49.898799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:41.113 [2024-05-15 12:49:49.898810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.113 [2024-05-15 12:49:49.898831] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:41.113 
[2024-05-15 12:49:49.898843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:41.113 [2024-05-15 12:49:49.898854] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:41.113 [2024-05-15 12:49:49.898877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:41.113 [2024-05-15 12:49:49.898889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:41.113 [2024-05-15 12:49:49.898901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:41.113 [2024-05-15 12:49:49.898913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:41.113 [2024-05-15 12:49:49.898924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:41.113 [2024-05-15 12:49:49.898935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:41.113 [2024-05-15 12:49:49.898947] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:41.113 [2024-05-15 12:49:49.898961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.898974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:41.113 [2024-05-15 12:49:49.898986] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.898997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.899008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:28:41.113 [2024-05-15 12:49:49.899020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:28:41.113 [2024-05-15 12:49:49.899032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:28:41.113 [2024-05-15 12:49:49.899043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:28:41.113 [2024-05-15 12:49:49.899055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.899066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.899077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.899103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.899115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:28:41.113 [2024-05-15 12:49:49.899127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 
blk_sz:0x3e0a0 00:28:41.113 [2024-05-15 12:49:49.899139] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:41.113 [2024-05-15 12:49:49.899154] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.899166] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:41.113 [2024-05-15 12:49:49.899178] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:41.113 [2024-05-15 12:49:49.899189] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:41.113 [2024-05-15 12:49:49.899201] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:41.113 [2024-05-15 12:49:49.899214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.899230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:41.113 [2024-05-15 12:49:49.899243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.922 ms 00:28:41.113 [2024-05-15 12:49:49.899254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:49.920545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.920614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:41.113 [2024-05-15 12:49:49.920652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.219 ms 00:28:41.113 [2024-05-15 12:49:49.920665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:49.920738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.920753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:41.113 [2024-05-15 12:49:49.920766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:41.113 [2024-05-15 12:49:49.920778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:49.962405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.962468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:41.113 [2024-05-15 12:49:49.962504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 41.535 ms 00:28:41.113 [2024-05-15 12:49:49.962558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:49.962658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.962675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:41.113 [2024-05-15 12:49:49.962689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:41.113 [2024-05-15 12:49:49.962701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:49.963341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.963365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:41.113 [2024-05-15 12:49:49.963380] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:28:41.113 [2024-05-15 12:49:49.963392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:49.963459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.963475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:41.113 [2024-05-15 12:49:49.963488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:41.113 [2024-05-15 12:49:49.963499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:49.985557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:49.985630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:41.113 [2024-05-15 12:49:49.985652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.017 ms 00:28:41.113 [2024-05-15 12:49:49.985665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:50.003750] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:41.113 [2024-05-15 12:49:50.003828] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:41.113 [2024-05-15 12:49:50.003867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:50.003880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:41.113 [2024-05-15 12:49:50.003897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.997 ms 00:28:41.113 [2024-05-15 12:49:50.003925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:50.021334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:50.021405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:41.113 [2024-05-15 12:49:50.021441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.304 ms 00:28:41.113 [2024-05-15 12:49:50.021456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.113 [2024-05-15 12:49:50.037048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.113 [2024-05-15 12:49:50.037118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:41.113 [2024-05-15 12:49:50.037154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.473 ms 00:28:41.113 [2024-05-15 12:49:50.037167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.114 [2024-05-15 12:49:50.052940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.114 [2024-05-15 12:49:50.053032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:41.114 [2024-05-15 12:49:50.053067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.686 ms 00:28:41.114 [2024-05-15 12:49:50.053078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.114 [2024-05-15 12:49:50.053642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.114 [2024-05-15 12:49:50.053686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:41.114 [2024-05-15 12:49:50.053702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:28:41.114 
[2024-05-15 12:49:50.053715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.134767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.134844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:41.372 [2024-05-15 12:49:50.134867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 81.020 ms 00:28:41.372 [2024-05-15 12:49:50.134880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.150078] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:41.372 [2024-05-15 12:49:50.151478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.151558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:41.372 [2024-05-15 12:49:50.151580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.505 ms 00:28:41.372 [2024-05-15 12:49:50.151598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.151727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.151751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:41.372 [2024-05-15 12:49:50.151764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:41.372 [2024-05-15 12:49:50.151777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.151855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.151879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:41.372 [2024-05-15 12:49:50.151894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:41.372 [2024-05-15 12:49:50.151906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.154071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.154112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:28:41.372 [2024-05-15 12:49:50.154149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.131 ms 00:28:41.372 [2024-05-15 12:49:50.154161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.154205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.154221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:41.372 [2024-05-15 12:49:50.154234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:41.372 [2024-05-15 12:49:50.154246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.154297] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:41.372 [2024-05-15 12:49:50.154315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.154327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:41.372 [2024-05-15 12:49:50.154340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:41.372 [2024-05-15 12:49:50.154357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.184588] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.184643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:41.372 [2024-05-15 12:49:50.184677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.201 ms 00:28:41.372 [2024-05-15 12:49:50.184689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.184779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.372 [2024-05-15 12:49:50.184798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:41.372 [2024-05-15 12:49:50.184818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:41.372 [2024-05-15 12:49:50.184830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.372 [2024-05-15 12:49:50.186285] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 322.643 ms, result 0 00:28:41.372 [2024-05-15 12:49:50.201008] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.372 [2024-05-15 12:49:50.217052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:28:41.372 [2024-05-15 12:49:50.226847] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:41.939 12:49:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:41.939 12:49:50 -- common/autotest_common.sh@852 -- # return 0 00:28:41.939 12:49:50 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:41.939 12:49:50 -- ftl/common.sh@95 -- # return 0 00:28:41.939 12:49:50 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:42.197 [2024-05-15 12:49:50.964551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.197 [2024-05-15 12:49:50.964626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:42.197 [2024-05-15 12:49:50.964665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:42.197 [2024-05-15 12:49:50.964678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.197 [2024-05-15 12:49:50.964724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.197 [2024-05-15 12:49:50.964741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:42.197 [2024-05-15 12:49:50.964754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:42.197 [2024-05-15 12:49:50.964765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.197 [2024-05-15 12:49:50.964793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.197 [2024-05-15 12:49:50.964807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:42.197 [2024-05-15 12:49:50.964820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:42.197 [2024-05-15 12:49:50.964831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.197 [2024-05-15 12:49:50.964914] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.361 ms, result 0 00:28:42.197 true 00:28:42.197 12:49:50 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:42.197 
{ 00:28:42.197 "name": "ftl", 00:28:42.197 "properties": [ 00:28:42.197 { 00:28:42.197 "name": "superblock_version", 00:28:42.197 "value": 5, 00:28:42.197 "read-only": true 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "name": "base_device", 00:28:42.197 "bands": [ 00:28:42.197 { 00:28:42.197 "id": 0, 00:28:42.197 "state": "CLOSED", 00:28:42.197 "validity": 1.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 1, 00:28:42.197 "state": "CLOSED", 00:28:42.197 "validity": 1.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 2, 00:28:42.197 "state": "CLOSED", 00:28:42.197 "validity": 0.007843137254901933 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 3, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 4, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 5, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 6, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 7, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 8, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 9, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 10, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 11, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 12, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 13, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 14, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 15, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 16, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 17, 00:28:42.197 "state": "FREE", 00:28:42.197 "validity": 0.0 00:28:42.197 } 00:28:42.197 ], 00:28:42.197 "read-only": true 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "name": "cache_device", 00:28:42.197 "type": "bdev", 00:28:42.197 "chunks": [ 00:28:42.197 { 00:28:42.197 "id": 0, 00:28:42.197 "state": "OPEN", 00:28:42.197 "utilization": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 1, 00:28:42.197 "state": "OPEN", 00:28:42.197 "utilization": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 2, 00:28:42.197 "state": "FREE", 00:28:42.197 "utilization": 0.0 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "id": 3, 00:28:42.197 "state": "FREE", 00:28:42.197 "utilization": 0.0 00:28:42.197 } 00:28:42.197 ], 00:28:42.197 "read-only": true 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "name": "verbose_mode", 00:28:42.197 "value": true, 00:28:42.197 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:42.197 }, 00:28:42.197 { 00:28:42.197 "name": "prep_upgrade_on_shutdown", 00:28:42.197 "value": false, 00:28:42.197 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:42.197 } 00:28:42.197 ] 00:28:42.197 } 00:28:42.197 12:49:51 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:42.197 12:49:51 -- 
ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:42.197 12:49:51 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:42.455 12:49:51 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:42.455 12:49:51 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:42.455 12:49:51 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:42.455 12:49:51 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:42.455 12:49:51 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:42.713 Validate MD5 checksum, iteration 1 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:42.713 12:49:51 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:42.713 12:49:51 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:42.713 12:49:51 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:42.713 12:49:51 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:42.713 12:49:51 -- ftl/common.sh@154 -- # return 0 00:28:42.713 12:49:51 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:42.971 [2024-05-15 12:49:51.798566] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
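[editor note] Before the read-back starts, the script derives two gating counts from the properties JSON above: NV-cache chunks with non-zero utilization (used=0) and bands left in state OPENED (opened=0), each feeding a [[ 0 -ne 0 ]] guard. One quirk worth flagging: the second jq filter as printed selects .name == "bands", but in the JSON the bands array is published under the property named "base_device", so that filter matches nothing and always yields 0. A minimal sketch of the same checks, assuming jq is available; RPC_PY and BDEV merely restate the paths from the trace:

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in the trace
  BDEV=ftl
  props=$("$RPC_PY" bdev_ftl_get_properties -b "$BDEV")

  # Dirty NV-cache chunks live under the "cache_device" property.
  used=$(jq '[.properties[] | select(.name == "cache_device")
              | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

  # The bands sit under "base_device"; this variant walks the actual array
  # instead of the no-op .name == "bands" selector seen in the trace.
  opened=$(jq '[.properties[] | select(.name == "base_device")
                | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

  echo "used=$used opened=$opened"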
00:28:42.971 [2024-05-15 12:49:51.798971] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79925 ] 00:28:42.971 [2024-05-15 12:49:51.967666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.230 [2024-05-15 12:49:52.207716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.959  Copying: 509/1024 [MB] (509 MBps) Copying: 994/1024 [MB] (485 MBps) Copying: 1024/1024 [MB] (average 497 MBps) 00:28:47.959 00:28:47.959 12:49:56 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:47.959 12:49:56 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:49.857 12:49:58 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:49.857 Validate MD5 checksum, iteration 2 00:28:49.857 12:49:58 -- ftl/upgrade_shutdown.sh@103 -- # sum=414a2faa208ee2eb0e81b2ba8074a015 00:28:49.857 12:49:58 -- ftl/upgrade_shutdown.sh@105 -- # [[ 414a2faa208ee2eb0e81b2ba8074a015 != \4\1\4\a\2\f\a\a\2\0\8\e\e\2\e\b\0\e\8\1\b\2\b\a\8\0\7\4\a\0\1\5 ]] 00:28:49.857 12:49:58 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:49.857 12:49:58 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:49.857 12:49:58 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:49.857 12:49:58 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:49.857 12:49:58 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:49.857 12:49:58 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:49.857 12:49:58 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:49.857 12:49:58 -- ftl/common.sh@154 -- # return 0 00:28:49.857 12:49:58 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:49.857 [2024-05-15 12:49:58.809705] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
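[editor note] Iteration 1 above reads the first 1024 MiB of the ftln1 bdev over NVMe/TCP in 1 MiB blocks at queue depth 2, hashes the output file, and compares against the reference digest; skip then advances by 1024 so iteration 2 covers the next GiB. The \4\1\4\a... on the right-hand side of != is just bash xtrace quoting the comparison pattern character by character, both sides are the same digest. A sketch of the loop, reconstructed from the xtrace lines rather than copied from upgrade_shutdown.sh; ${expected[i]} is a stand-in for wherever the harness keeps the reference digests:

  test_validate_checksum() {     # function name as it appears in the trace
    local file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    local iterations=2 skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # 1024 one-MiB blocks from the ftln1 bdev over NVMe/TCP, offset by $skip
      tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      ((skip += 1024))
      sum=$(md5sum "$file" | cut -f1 -d' ')
      # Fail the run on the first slice whose digest drifts from the reference.
      [[ $sum == "${expected[i]}" ]] || return 1
    done
  }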
00:28:49.857 [2024-05-15 12:49:58.809859] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80005 ] 00:28:50.114 [2024-05-15 12:49:58.985295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.372 [2024-05-15 12:49:59.247396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.107  Copying: 455/1024 [MB] (455 MBps) Copying: 942/1024 [MB] (487 MBps) Copying: 1024/1024 [MB] (average 471 MBps) 00:28:55.107 00:28:55.107 12:50:03 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:55.107 12:50:03 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:57.026 12:50:06 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:57.026 12:50:06 -- ftl/upgrade_shutdown.sh@103 -- # sum=74d6999a11e432bbb3e576b30ad42d79 00:28:57.026 12:50:06 -- ftl/upgrade_shutdown.sh@105 -- # [[ 74d6999a11e432bbb3e576b30ad42d79 != \7\4\d\6\9\9\9\a\1\1\e\4\3\2\b\b\b\3\e\5\7\6\b\3\0\a\d\4\2\d\7\9 ]] 00:28:57.026 12:50:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:57.026 12:50:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:57.026 12:50:06 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:57.026 12:50:06 -- ftl/common.sh@137 -- # [[ -n 79873 ]] 00:28:57.026 12:50:06 -- ftl/common.sh@138 -- # kill -9 79873 00:28:57.026 12:50:06 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:57.026 12:50:06 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:57.026 12:50:06 -- ftl/common.sh@81 -- # local base_bdev= 00:28:57.026 12:50:06 -- ftl/common.sh@82 -- # local cache_bdev= 00:28:57.026 12:50:06 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:57.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.026 12:50:06 -- ftl/common.sh@89 -- # spdk_tgt_pid=80079 00:28:57.026 12:50:06 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:57.026 12:50:06 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:57.026 12:50:06 -- ftl/common.sh@91 -- # waitforlisten 80079 00:28:57.026 12:50:06 -- common/autotest_common.sh@819 -- # '[' -z 80079 ']' 00:28:57.026 12:50:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.026 12:50:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:57.026 12:50:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.026 12:50:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:57.026 12:50:06 -- common/autotest_common.sh@10 -- # set +x 00:28:57.285 [2024-05-15 12:50:06.147822] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
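[editor note] This is the pivot of the test: tcp_target_shutdown_dirty sends SIGKILL to the target (pid 79873), so FTL gets no chance to persist shutdown state, and tcp_target_setup relaunches spdk_tgt (new pid 80079) from the saved tgt.json. The "line 818: 79873 Killed ..." line is just the shell reaping the killed background job, expected here. A sketch of the two helpers, reconstructed from the xtrace and the Killed message (which names $spdk_tgt_bin, $spdk_tgt_cpumask, $spdk_tgt_cnfg), not copied from ftl/common.sh:

  tcp_target_shutdown_dirty() {
    # SIGKILL denies FTL a clean shutdown, forcing the next startup down
    # the recovery path traced below.
    [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
  }

  tcp_target_setup() {
    # Relaunch from the saved JSON config so the same bdev stack (base
    # device plus the cachen1p0 write-buffer cache) exists before FTL
    # is brought up again.
    $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
  }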
00:28:57.285 [2024-05-15 12:50:06.148013] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80079 ] 00:28:57.542 [2024-05-15 12:50:06.322121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 79873 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:57.800 [2024-05-15 12:50:06.562648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:57.800 [2024-05-15 12:50:06.562913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.736 [2024-05-15 12:50:07.433964] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:58.736 [2024-05-15 12:50:07.434065] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:58.736 [2024-05-15 12:50:07.577141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.577215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:58.736 [2024-05-15 12:50:07.577253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:58.736 [2024-05-15 12:50:07.577264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.577343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.577382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:58.736 [2024-05-15 12:50:07.577395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:28:58.736 [2024-05-15 12:50:07.577406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.577446] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:58.736 [2024-05-15 12:50:07.578479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:58.736 [2024-05-15 12:50:07.578538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.578560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:58.736 [2024-05-15 12:50:07.578573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.099 ms 00:28:58.736 [2024-05-15 12:50:07.578584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.579068] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:58.736 [2024-05-15 12:50:07.601434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.601542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:58.736 [2024-05-15 12:50:07.601566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.364 ms 00:28:58.736 [2024-05-15 12:50:07.601580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.614490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.614579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:58.736 [2024-05-15 12:50:07.614599] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:58.736 [2024-05-15 12:50:07.614612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.615161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.615189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:58.736 [2024-05-15 12:50:07.615205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:28:58.736 [2024-05-15 12:50:07.615217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.615276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.615295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:58.736 [2024-05-15 12:50:07.615308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:58.736 [2024-05-15 12:50:07.615320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.615367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.615387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:58.736 [2024-05-15 12:50:07.615399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:58.736 [2024-05-15 12:50:07.615411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.615459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:58.736 [2024-05-15 12:50:07.619619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.619660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:58.736 [2024-05-15 12:50:07.619707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.179 ms 00:28:58.736 [2024-05-15 12:50:07.619719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.619759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.619777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:58.736 [2024-05-15 12:50:07.619790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:58.736 [2024-05-15 12:50:07.619802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.619851] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:58.736 [2024-05-15 12:50:07.619886] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:28:58.736 [2024-05-15 12:50:07.619926] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:58.736 [2024-05-15 12:50:07.619954] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:28:58.736 [2024-05-15 12:50:07.620037] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:28:58.736 [2024-05-15 12:50:07.620053] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:58.736 [2024-05-15 12:50:07.620067] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:28:58.736 [2024-05-15 12:50:07.620082] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:58.736 [2024-05-15 12:50:07.620105] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:58.736 [2024-05-15 12:50:07.620118] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:58.736 [2024-05-15 12:50:07.620129] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:58.736 [2024-05-15 12:50:07.620140] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:28:58.736 [2024-05-15 12:50:07.620151] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:28:58.736 [2024-05-15 12:50:07.620163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.620175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:58.736 [2024-05-15 12:50:07.620187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.316 ms 00:28:58.736 [2024-05-15 12:50:07.620198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.620272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.736 [2024-05-15 12:50:07.620305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:58.736 [2024-05-15 12:50:07.620322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:58.736 [2024-05-15 12:50:07.620334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.736 [2024-05-15 12:50:07.620430] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:58.736 [2024-05-15 12:50:07.620459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:58.736 [2024-05-15 12:50:07.620472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:58.737 [2024-05-15 12:50:07.620508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:58.737 [2024-05-15 12:50:07.620535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:58.737 [2024-05-15 12:50:07.620575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:58.737 [2024-05-15 12:50:07.620587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:58.737 [2024-05-15 12:50:07.620609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:58.737 [2024-05-15 12:50:07.620620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:58.737 [2024-05-15 12:50:07.620642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:28:58.737 [2024-05-15 12:50:07.620676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:28:58.737 [2024-05-15 12:50:07.620688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:28:58.737 [2024-05-15 12:50:07.620710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:28:58.737 [2024-05-15 12:50:07.620721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:58.737 [2024-05-15 12:50:07.620742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:58.737 [2024-05-15 12:50:07.620753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:58.737 [2024-05-15 12:50:07.620774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:28:58.737 [2024-05-15 12:50:07.620785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:58.737 [2024-05-15 12:50:07.620806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:58.737 [2024-05-15 12:50:07.620817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:58.737 [2024-05-15 12:50:07.620838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:28:58.737 [2024-05-15 12:50:07.620848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:58.737 [2024-05-15 12:50:07.620868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:58.737 [2024-05-15 12:50:07.620879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620889] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:58.737 [2024-05-15 12:50:07.620900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620920] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:58.737 [2024-05-15 12:50:07.620932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:58.737 [2024-05-15 12:50:07.620943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:58.737 [2024-05-15 12:50:07.620961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.737 [2024-05-15 12:50:07.620973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:58.737 [2024-05-15 12:50:07.620984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:58.737 [2024-05-15 12:50:07.620995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:58.737 [2024-05-15 12:50:07.621007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:58.737 [2024-05-15 12:50:07.621018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:28:58.737 [2024-05-15 12:50:07.621029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:58.737 [2024-05-15 12:50:07.621042] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:58.737 [2024-05-15 12:50:07.621056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:58.737 [2024-05-15 12:50:07.621082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:28:58.737 [2024-05-15 12:50:07.621119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:28:58.737 [2024-05-15 12:50:07.621131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:28:58.737 [2024-05-15 12:50:07.621143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:28:58.737 [2024-05-15 12:50:07.621155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:28:58.737 [2024-05-15 12:50:07.621232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:28:58.737 [2024-05-15 12:50:07.621244] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:58.737 [2024-05-15 12:50:07.621257] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621270] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:58.737 [2024-05-15 12:50:07.621282] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:58.737 [2024-05-15 12:50:07.621294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:58.737 
[2024-05-15 12:50:07.621306] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:58.737 [2024-05-15 12:50:07.621319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.621332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:58.737 [2024-05-15 12:50:07.621344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:28:58.737 [2024-05-15 12:50:07.621356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.642027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.642086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:58.737 [2024-05-15 12:50:07.642123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.601 ms 00:28:58.737 [2024-05-15 12:50:07.642135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.642212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.642235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:58.737 [2024-05-15 12:50:07.642248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:58.737 [2024-05-15 12:50:07.642260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.686765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.686826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:58.737 [2024-05-15 12:50:07.686847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 44.418 ms 00:28:58.737 [2024-05-15 12:50:07.686860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.686943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.686961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:58.737 [2024-05-15 12:50:07.686975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:58.737 [2024-05-15 12:50:07.686994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.687150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.687170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:58.737 [2024-05-15 12:50:07.687183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:28:58.737 [2024-05-15 12:50:07.687195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.687253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.687270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:58.737 [2024-05-15 12:50:07.687283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:58.737 [2024-05-15 12:50:07.687295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.708997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.709059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:58.737 [2024-05-15 
12:50:07.709095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.662 ms 00:28:58.737 [2024-05-15 12:50:07.709114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.709323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.737 [2024-05-15 12:50:07.709346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:58.737 [2024-05-15 12:50:07.709361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:58.737 [2024-05-15 12:50:07.709372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.737 [2024-05-15 12:50:07.731258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.738 [2024-05-15 12:50:07.731324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:58.738 [2024-05-15 12:50:07.731378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.846 ms 00:28:58.738 [2024-05-15 12:50:07.731391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.738 [2024-05-15 12:50:07.744773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.738 [2024-05-15 12:50:07.745054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:58.738 [2024-05-15 12:50:07.745094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:28:58.738 [2024-05-15 12:50:07.745107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.046 [2024-05-15 12:50:07.826213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.046 [2024-05-15 12:50:07.826295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:59.046 [2024-05-15 12:50:07.826333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 80.990 ms 00:28:59.046 [2024-05-15 12:50:07.826347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.046 [2024-05-15 12:50:07.826529] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:59.046 [2024-05-15 12:50:07.826588] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:59.046 [2024-05-15 12:50:07.826636] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:59.046 [2024-05-15 12:50:07.826684] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:59.046 [2024-05-15 12:50:07.826699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.046 [2024-05-15 12:50:07.826716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:59.046 [2024-05-15 12:50:07.826729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.239 ms 00:28:59.046 [2024-05-15 12:50:07.826741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.046 [2024-05-15 12:50:07.826855] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:59.046 [2024-05-15 12:50:07.826877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.046 [2024-05-15 12:50:07.826895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:59.046 [2024-05-15 12:50:07.826908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:59.046 [2024-05-15 
12:50:07.826920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.046 [2024-05-15 12:50:07.847766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.046 [2024-05-15 12:50:07.847832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:59.046 [2024-05-15 12:50:07.847853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.810 ms 00:28:59.046 [2024-05-15 12:50:07.847866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.046 [2024-05-15 12:50:07.860062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.046 [2024-05-15 12:50:07.860104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:59.046 [2024-05-15 12:50:07.860137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:59.046 [2024-05-15 12:50:07.860149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.046 [2024-05-15 12:50:07.860251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.046 [2024-05-15 12:50:07.860270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:28:59.046 [2024-05-15 12:50:07.860290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:59.047 [2024-05-15 12:50:07.860301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.047 [2024-05-15 12:50:07.860571] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:28:59.614 [2024-05-15 12:50:08.405490] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:28:59.614 [2024-05-15 12:50:08.405735] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:29:00.179 [2024-05-15 12:50:08.884225] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:29:00.179 [2024-05-15 12:50:08.884377] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:00.180 [2024-05-15 12:50:08.884401] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:00.180 [2024-05-15 12:50:08.884418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.884432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:00.180 [2024-05-15 12:50:08.884450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1024.088 ms 00:29:00.180 [2024-05-15 12:50:08.884463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.884545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.884565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:00.180 [2024-05-15 12:50:08.884580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:00.180 [2024-05-15 12:50:08.884606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.898081] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:00.180 [2024-05-15 12:50:08.898257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.898278] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:00.180 [2024-05-15 12:50:08.898293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.624 ms 00:29:00.180 [2024-05-15 12:50:08.898306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.899095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.899130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:29:00.180 [2024-05-15 12:50:08.899146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.662 ms 00:29:00.180 [2024-05-15 12:50:08.899164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.901612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.901646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:00.180 [2024-05-15 12:50:08.901661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.419 ms 00:29:00.180 [2024-05-15 12:50:08.901673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.932796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.932846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:29:00.180 [2024-05-15 12:50:08.932864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.088 ms 00:29:00.180 [2024-05-15 12:50:08.932883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.933034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.933056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:00.180 [2024-05-15 12:50:08.933071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:00.180 [2024-05-15 12:50:08.933083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.935288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.935327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:29:00.180 [2024-05-15 12:50:08.935344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.176 ms 00:29:00.180 [2024-05-15 12:50:08.935356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.935406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.935423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:00.180 [2024-05-15 12:50:08.935436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:00.180 [2024-05-15 12:50:08.935447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.935515] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:00.180 [2024-05-15 12:50:08.935536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.935548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:00.180 [2024-05-15 12:50:08.935560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:00.180 [2024-05-15 12:50:08.935580] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.935658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.180 [2024-05-15 12:50:08.935675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:00.180 [2024-05-15 12:50:08.935688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:00.180 [2024-05-15 12:50:08.935699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.180 [2024-05-15 12:50:08.937021] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1359.358 ms, result 0 00:29:00.180 [2024-05-15 12:50:08.949453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.180 [2024-05-15 12:50:08.965475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:29:00.180 [2024-05-15 12:50:08.975313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:00.439 Validate MD5 checksum, iteration 1 00:29:00.439 12:50:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.439 12:50:09 -- common/autotest_common.sh@852 -- # return 0 00:29:00.439 12:50:09 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:00.439 12:50:09 -- ftl/common.sh@95 -- # return 0 00:29:00.439 12:50:09 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:00.439 12:50:09 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:00.439 12:50:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:00.439 12:50:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:00.439 12:50:09 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:00.439 12:50:09 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:00.439 12:50:09 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:00.439 12:50:09 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:00.439 12:50:09 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:00.439 12:50:09 -- ftl/common.sh@154 -- # return 0 00:29:00.439 12:50:09 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:00.439 [2024-05-15 12:50:09.402410] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
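[Editor's note] The xtrace above (upgrade_shutdown.sh@96-99, with tcp_dd resolving through ftl/common.sh@199) gives the shape of the checksum pass: zero the skip offset, loop over the iterations, announce the pass, then stream 1024 x 1 MiB blocks out of the ftln1 bdev over NVMe/TCP with spdk_dd. A minimal bash sketch of that loop, reconstructed from the trace — $iterations, $testfile and the tcp_dd wrapper are taken as given from the log, and the hashing step at the end is simplified, not the script verbatim:

    # Reconstruction of the loop traced above; tcp_dd wraps the spdk_dd
    # invocation shown in the log (--cpumask, --rpc-socket, --json=ini.json).
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 MiB window from the FTL bdev into a scratch file.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # Hash the window; the literal-match comparison is traced further below.
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
    done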
00:29:00.439 [2024-05-15 12:50:09.402614] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80122 ] 00:29:00.697 [2024-05-15 12:50:09.581624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.955 [2024-05-15 12:50:09.875212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.165  Copying: 509/1024 [MB] (509 MBps) Copying: 981/1024 [MB] (472 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:29:05.165 00:29:05.165 12:50:14 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:05.165 12:50:14 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:07.784 12:50:16 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:07.784 Validate MD5 checksum, iteration 2 00:29:07.784 12:50:16 -- ftl/upgrade_shutdown.sh@103 -- # sum=414a2faa208ee2eb0e81b2ba8074a015 00:29:07.784 12:50:16 -- ftl/upgrade_shutdown.sh@105 -- # [[ 414a2faa208ee2eb0e81b2ba8074a015 != \4\1\4\a\2\f\a\a\2\0\8\e\e\2\e\b\0\e\8\1\b\2\b\a\8\0\7\4\a\0\1\5 ]] 00:29:07.784 12:50:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:07.784 12:50:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:07.784 12:50:16 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:07.784 12:50:16 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:07.784 12:50:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:07.784 12:50:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:07.784 12:50:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:07.784 12:50:16 -- ftl/common.sh@154 -- # return 0 00:29:07.784 12:50:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:07.784 [2024-05-15 12:50:16.341789] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
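[Editor's note] The comparison traced above — [[ 414a2faa... != \4\1\4\a... ]] — looks garbled but is just set -x rendering: inside [[ ]], the right-hand side of != is a glob pattern, so the script backslash-escapes every character of the recorded checksum to force a byte-for-byte match, and xtrace prints those escapes. Quoting achieves the same effect; a one-line equivalent (variable names illustrative, the .md5 file matching the one removed during cleanup further below):

    expected=$(cut -f1 -d' ' "$testfile.md5")    # recorded sum; exact location assumed
    # A quoted RHS disables glob interpretation, same effect as the escaping above.
    [[ $sum != "$expected" ]] && { echo 'checksum mismatch'; exit 1; }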
00:29:07.784 [2024-05-15 12:50:16.342002] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80197 ] 00:29:07.784 [2024-05-15 12:50:16.503316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.784 [2024-05-15 12:50:16.756464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.291  Copying: 484/1024 [MB] (484 MBps) Copying: 919/1024 [MB] (435 MBps) Copying: 1024/1024 [MB] (average 456 MBps) 00:29:12.291 00:29:12.291 12:50:21 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:12.291 12:50:21 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@103 -- # sum=74d6999a11e432bbb3e576b30ad42d79 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@105 -- # [[ 74d6999a11e432bbb3e576b30ad42d79 != \7\4\d\6\9\9\9\a\1\1\e\4\3\2\b\b\b\3\e\5\7\6\b\3\0\a\d\4\2\d\7\9 ]] 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:14.842 12:50:23 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:14.842 12:50:23 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:14.842 12:50:23 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:14.842 12:50:23 -- ftl/common.sh@130 -- # [[ -n 80079 ]] 00:29:14.842 12:50:23 -- ftl/common.sh@131 -- # killprocess 80079 00:29:14.842 12:50:23 -- common/autotest_common.sh@926 -- # '[' -z 80079 ']' 00:29:14.842 12:50:23 -- common/autotest_common.sh@930 -- # kill -0 80079 00:29:14.842 12:50:23 -- common/autotest_common.sh@931 -- # uname 00:29:14.842 12:50:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.842 12:50:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80079 00:29:14.842 killing process with pid 80079 00:29:14.842 12:50:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:14.842 12:50:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:14.842 12:50:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80079' 00:29:14.842 12:50:23 -- common/autotest_common.sh@945 -- # kill 80079 00:29:14.843 12:50:23 -- common/autotest_common.sh@950 -- # wait 80079 00:29:15.777 [2024-05-15 12:50:24.649627] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:29:15.777 [2024-05-15 12:50:24.668072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.668119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:15.777 [2024-05-15 12:50:24.668154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:15.777 [2024-05-15 12:50:24.668167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 
12:50:24.668197] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:15.777 [2024-05-15 12:50:24.671954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.671989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:15.777 [2024-05-15 12:50:24.672011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.720 ms 00:29:15.777 [2024-05-15 12:50:24.672032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.672291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.672316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:15.777 [2024-05-15 12:50:24.672329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:29:15.777 [2024-05-15 12:50:24.672340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.673737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.673778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:15.777 [2024-05-15 12:50:24.673794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.372 ms 00:29:15.777 [2024-05-15 12:50:24.673806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.675040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.675073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:29:15.777 [2024-05-15 12:50:24.675088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.183 ms 00:29:15.777 [2024-05-15 12:50:24.675100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.689097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.689170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:15.777 [2024-05-15 12:50:24.689201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.933 ms 00:29:15.777 [2024-05-15 12:50:24.689234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.696265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.696312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:15.777 [2024-05-15 12:50:24.696330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.959 ms 00:29:15.777 [2024-05-15 12:50:24.696342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.696455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.696476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:15.777 [2024-05-15 12:50:24.696518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:29:15.777 [2024-05-15 12:50:24.696534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.709411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.709454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:15.777 [2024-05-15 12:50:24.709486] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.832 ms 00:29:15.777 [2024-05-15 12:50:24.709497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.722334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.722379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:15.777 [2024-05-15 12:50:24.722412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.728 ms 00:29:15.777 [2024-05-15 12:50:24.722422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.734606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.734660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:15.777 [2024-05-15 12:50:24.734694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.135 ms 00:29:15.777 [2024-05-15 12:50:24.734706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.777 [2024-05-15 12:50:24.746717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.777 [2024-05-15 12:50:24.746759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:15.778 [2024-05-15 12:50:24.746791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.928 ms 00:29:15.778 [2024-05-15 12:50:24.746801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.778 [2024-05-15 12:50:24.746841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:15.778 [2024-05-15 12:50:24.746866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:15.778 [2024-05-15 12:50:24.746880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:15.778 [2024-05-15 12:50:24.746893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:15.778 [2024-05-15 12:50:24.746905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.746997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.747008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.747019] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.747031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.747042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.747053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.747065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:15.778 [2024-05-15 12:50:24.747078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:15.778 [2024-05-15 12:50:24.747106] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 55113400-f21e-4571-87ef-32d93949112a 00:29:15.778 [2024-05-15 12:50:24.747118] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:15.778 [2024-05-15 12:50:24.747130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:15.778 [2024-05-15 12:50:24.747141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:15.778 [2024-05-15 12:50:24.747152] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:15.778 [2024-05-15 12:50:24.747163] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:15.778 [2024-05-15 12:50:24.747174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:15.778 [2024-05-15 12:50:24.747185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:15.778 [2024-05-15 12:50:24.747195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:15.778 [2024-05-15 12:50:24.747205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:15.778 [2024-05-15 12:50:24.747217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.778 [2024-05-15 12:50:24.747233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:15.778 [2024-05-15 12:50:24.747246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:29:15.778 [2024-05-15 12:50:24.747257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.778 [2024-05-15 12:50:24.763929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.778 [2024-05-15 12:50:24.763969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:15.778 [2024-05-15 12:50:24.764001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.645 ms 00:29:15.778 [2024-05-15 12:50:24.764014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.778 [2024-05-15 12:50:24.764274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.778 [2024-05-15 12:50:24.764290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:15.778 [2024-05-15 12:50:24.764303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:29:15.778 [2024-05-15 12:50:24.764315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.036 [2024-05-15 12:50:24.824402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.036 [2024-05-15 12:50:24.824468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:16.036 [2024-05-15 12:50:24.824486] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.036 [2024-05-15 12:50:24.824531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.036 [2024-05-15 12:50:24.824621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.036 [2024-05-15 12:50:24.824637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:16.036 [2024-05-15 12:50:24.824650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.036 [2024-05-15 12:50:24.824661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.036 [2024-05-15 12:50:24.824783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.824802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:16.037 [2024-05-15 12:50:24.824816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.824828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.824853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.824874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:16.037 [2024-05-15 12:50:24.824886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.824898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.927456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.927532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:16.037 [2024-05-15 12:50:24.927583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.927597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.968127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.968204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:16.037 [2024-05-15 12:50:24.968224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.968236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.968357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.968374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:16.037 [2024-05-15 12:50:24.968386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.968397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.968452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.968467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:16.037 [2024-05-15 12:50:24.968490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.968501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.968705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.968723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:16.037 [2024-05-15 
12:50:24.968736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.968747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.968796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.968812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:16.037 [2024-05-15 12:50:24.968823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.968840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.968893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.968908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:16.037 [2024-05-15 12:50:24.968920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.968945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.969012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.037 [2024-05-15 12:50:24.969027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:16.037 [2024-05-15 12:50:24.969044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.037 [2024-05-15 12:50:24.969054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.037 [2024-05-15 12:50:24.969216] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 301.091 ms, result 0 00:29:17.410 12:50:26 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:17.410 12:50:26 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:17.410 12:50:26 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:17.410 12:50:26 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:17.410 12:50:26 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:17.410 12:50:26 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:17.410 Remove shared memory files 00:29:17.410 12:50:26 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:17.410 12:50:26 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:17.410 12:50:26 -- ftl/common.sh@205 -- # rm -f rm -f 00:29:17.410 12:50:26 -- ftl/common.sh@206 -- # rm -f rm -f 00:29:17.410 12:50:26 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79873 00:29:17.410 12:50:26 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:17.410 12:50:26 -- ftl/common.sh@209 -- # rm -f rm -f 00:29:17.410 ************************************ 00:29:17.410 END TEST ftl_upgrade_shutdown 00:29:17.410 ************************************ 00:29:17.410 00:29:17.410 real 1m36.615s 00:29:17.410 user 2m19.092s 00:29:17.410 sys 0m24.155s 00:29:17.410 12:50:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.410 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:29:17.410 Process with pid 72411 is not found 00:29:17.410 12:50:26 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:29:17.410 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:29:17.410 12:50:26 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:29:17.410 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:29:17.410 12:50:26 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:17.410 12:50:26 
-- ftl/ftl.sh@14 -- # killprocess 72411 00:29:17.410 12:50:26 -- common/autotest_common.sh@926 -- # '[' -z 72411 ']' 00:29:17.410 12:50:26 -- common/autotest_common.sh@930 -- # kill -0 72411 00:29:17.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (72411) - No such process 00:29:17.411 12:50:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 72411 is not found' 00:29:17.411 12:50:26 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:29:17.411 12:50:26 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=80332 00:29:17.411 12:50:26 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:17.411 12:50:26 -- ftl/ftl.sh@20 -- # waitforlisten 80332 00:29:17.411 12:50:26 -- common/autotest_common.sh@819 -- # '[' -z 80332 ']' 00:29:17.411 12:50:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.411 12:50:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:17.411 12:50:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.411 12:50:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:17.411 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:29:17.411 [2024-05-15 12:50:26.390876] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:17.411 [2024-05-15 12:50:26.391769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80332 ] 00:29:17.669 [2024-05-15 12:50:26.553132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.928 [2024-05-15 12:50:26.780570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:17.928 [2024-05-15 12:50:26.781077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.304 12:50:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:19.304 12:50:28 -- common/autotest_common.sh@852 -- # return 0 00:29:19.304 12:50:28 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:29:19.304 nvme0n1 00:29:19.304 12:50:28 -- ftl/ftl.sh@22 -- # clear_lvols 00:29:19.562 12:50:28 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:19.562 12:50:28 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:19.562 12:50:28 -- ftl/common.sh@28 -- # stores=342e5504-974f-4553-a377-1889b02a030b 00:29:19.562 12:50:28 -- ftl/common.sh@29 -- # for lvs in $stores 00:29:19.562 12:50:28 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 342e5504-974f-4553-a377-1889b02a030b 00:29:19.821 12:50:28 -- ftl/ftl.sh@23 -- # killprocess 80332 00:29:19.821 12:50:28 -- common/autotest_common.sh@926 -- # '[' -z 80332 ']' 00:29:19.821 12:50:28 -- common/autotest_common.sh@930 -- # kill -0 80332 00:29:19.821 12:50:28 -- common/autotest_common.sh@931 -- # uname 00:29:19.821 12:50:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.821 12:50:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80332 00:29:19.821 killing process with pid 80332 00:29:19.821 12:50:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
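[Editor's note] A note on the two errors a few lines up: ftl.sh: line 82: [: -eq: unary operator expected (and the same at line 89) is not a test failure. The xtrace shows the test ran as '[' -eq 1 ']' — a variable expanded to nothing in front of -eq, so the single-bracket test saw a missing operand, returned non-zero, and the script simply fell through to the at_ftl_exit cleanup traced here. A minimal reproduction plus the usual guard (the flag name is hypothetical):

    unset extra_tests                            # hypothetical flag an earlier stage never set
    [ $extra_tests -eq 1 ] && echo run           # -> [: -eq: unary operator expected
    [ "${extra_tests:-0}" -eq 1 ] && echo run    # quoted + defaulted: skipped cleanly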
00:29:19.821 12:50:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:19.821 12:50:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80332' 00:29:19.821 12:50:28 -- common/autotest_common.sh@945 -- # kill 80332 00:29:19.821 12:50:28 -- common/autotest_common.sh@950 -- # wait 80332 00:29:22.350 12:50:30 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:22.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:22.350 Waiting for block devices as requested 00:29:22.350 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:29:22.350 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:29:22.608 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:22.608 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:29:27.868 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:29:27.868 Remove shared memory files 00:29:27.868 12:50:36 -- ftl/ftl.sh@28 -- # remove_shm 00:29:27.868 12:50:36 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:27.868 12:50:36 -- ftl/common.sh@205 -- # rm -f rm -f 00:29:27.868 12:50:36 -- ftl/common.sh@206 -- # rm -f rm -f 00:29:27.868 12:50:36 -- ftl/common.sh@207 -- # rm -f rm -f 00:29:27.868 12:50:36 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:27.868 12:50:36 -- ftl/common.sh@209 -- # rm -f rm -f 00:29:27.868 ************************************ 00:29:27.868 END TEST ftl 00:29:27.868 ************************************ 00:29:27.868 00:29:27.868 real 11m48.807s 00:29:27.868 user 14m44.700s 00:29:27.868 sys 1m34.390s 00:29:27.868 12:50:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.868 12:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 12:50:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:27.868 12:50:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:27.868 12:50:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:27.868 12:50:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:27.868 12:50:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:27.868 12:50:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:27.868 12:50:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:27.868 12:50:36 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:27.868 12:50:36 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:27.868 12:50:36 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:27.868 12:50:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:27.868 12:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 12:50:36 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:27.868 12:50:36 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:27.868 12:50:36 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:27.868 12:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:29.244 INFO: APP EXITING 00:29:29.244 INFO: killing all VMs 00:29:29.244 INFO: killing vhost app 00:29:29.244 INFO: EXIT DONE 00:29:29.811 lsblk: /dev/nvme0c0n1: not a block device 00:29:30.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:30.069 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:29:30.069 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:29:30.069 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:29:30.069 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:29:31.004 lsblk: /dev/nvme0c0n1: not a 
block device 00:29:31.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:31.004 Cleaning 00:29:31.004 Removing: /var/run/dpdk/spdk0/config 00:29:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:31.004 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:31.004 Removing: /var/run/dpdk/spdk0 00:29:31.004 Removing: /var/run/dpdk/spdk_pid56840 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57061 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57366 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57476 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57576 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57691 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57798 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57837 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57879 00:29:31.004 Removing: /var/run/dpdk/spdk_pid57946 00:29:31.004 Removing: /var/run/dpdk/spdk_pid58052 00:29:31.004 Removing: /var/run/dpdk/spdk_pid58507 00:29:31.004 Removing: /var/run/dpdk/spdk_pid58584 00:29:31.004 Removing: /var/run/dpdk/spdk_pid58668 00:29:31.004 Removing: /var/run/dpdk/spdk_pid58691 00:29:31.004 Removing: /var/run/dpdk/spdk_pid58841 00:29:31.004 Removing: /var/run/dpdk/spdk_pid58865 00:29:31.004 Removing: /var/run/dpdk/spdk_pid59015 00:29:31.004 Removing: /var/run/dpdk/spdk_pid59039 00:29:31.004 Removing: /var/run/dpdk/spdk_pid59108 00:29:31.004 Removing: /var/run/dpdk/spdk_pid59134 00:29:31.004 Removing: /var/run/dpdk/spdk_pid59198 00:29:31.004 Removing: /var/run/dpdk/spdk_pid59229 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59414 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59452 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59532 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59620 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59657 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59735 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59763 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59810 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59847 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59888 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59925 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59966 00:29:31.264 Removing: /var/run/dpdk/spdk_pid59999 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60044 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60076 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60122 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60154 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60199 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60232 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60273 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60310 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60352 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60388 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60435 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60465 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60513 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60539 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60591 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60623 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60669 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60706 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60753 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60779 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60831 
00:29:31.264 Removing: /var/run/dpdk/spdk_pid60857 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60909 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60935 00:29:31.264 Removing: /var/run/dpdk/spdk_pid60987 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61017 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61071 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61106 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61155 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61187 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61228 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61265 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61313 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61394 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61509 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61687 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61789 00:29:31.264 Removing: /var/run/dpdk/spdk_pid61832 00:29:31.264 Removing: /var/run/dpdk/spdk_pid62319 00:29:31.264 Removing: /var/run/dpdk/spdk_pid62484 00:29:31.264 Removing: /var/run/dpdk/spdk_pid62594 00:29:31.264 Removing: /var/run/dpdk/spdk_pid62654 00:29:31.264 Removing: /var/run/dpdk/spdk_pid62685 00:29:31.264 Removing: /var/run/dpdk/spdk_pid62760 00:29:31.264 Removing: /var/run/dpdk/spdk_pid63453 00:29:31.264 Removing: /var/run/dpdk/spdk_pid63499 00:29:31.264 Removing: /var/run/dpdk/spdk_pid64022 00:29:31.264 Removing: /var/run/dpdk/spdk_pid64131 00:29:31.264 Removing: /var/run/dpdk/spdk_pid64240 00:29:31.264 Removing: /var/run/dpdk/spdk_pid64299 00:29:31.264 Removing: /var/run/dpdk/spdk_pid64330 00:29:31.264 Removing: /var/run/dpdk/spdk_pid64361 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66323 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66475 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66479 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66496 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66541 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66545 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66557 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66607 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66612 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66624 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66669 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66678 00:29:31.264 Removing: /var/run/dpdk/spdk_pid66696 00:29:31.264 Removing: /var/run/dpdk/spdk_pid68142 00:29:31.264 Removing: /var/run/dpdk/spdk_pid68249 00:29:31.264 Removing: /var/run/dpdk/spdk_pid68394 00:29:31.264 Removing: /var/run/dpdk/spdk_pid68526 00:29:31.264 Removing: /var/run/dpdk/spdk_pid68663 00:29:31.264 Removing: /var/run/dpdk/spdk_pid68801 00:29:31.264 Removing: /var/run/dpdk/spdk_pid68950 00:29:31.264 Removing: /var/run/dpdk/spdk_pid69031 00:29:31.264 Removing: /var/run/dpdk/spdk_pid69171 00:29:31.264 Removing: /var/run/dpdk/spdk_pid69567 00:29:31.264 Removing: /var/run/dpdk/spdk_pid69609 00:29:31.264 Removing: /var/run/dpdk/spdk_pid70089 00:29:31.264 Removing: /var/run/dpdk/spdk_pid70281 00:29:31.264 Removing: /var/run/dpdk/spdk_pid70386 00:29:31.264 Removing: /var/run/dpdk/spdk_pid70500 00:29:31.264 Removing: /var/run/dpdk/spdk_pid70558 00:29:31.523 Removing: /var/run/dpdk/spdk_pid70585 00:29:31.523 Removing: /var/run/dpdk/spdk_pid70905 00:29:31.523 Removing: /var/run/dpdk/spdk_pid70973 00:29:31.523 Removing: /var/run/dpdk/spdk_pid71053 00:29:31.523 Removing: /var/run/dpdk/spdk_pid71456 00:29:31.523 Removing: /var/run/dpdk/spdk_pid71610 00:29:31.523 Removing: /var/run/dpdk/spdk_pid72411 00:29:31.523 Removing: /var/run/dpdk/spdk_pid72545 00:29:31.523 Removing: /var/run/dpdk/spdk_pid72748 00:29:31.523 Removing: 
/var/run/dpdk/spdk_pid72855 00:29:31.523 Removing: /var/run/dpdk/spdk_pid73198 00:29:31.523 Removing: /var/run/dpdk/spdk_pid73460 00:29:31.523 Removing: /var/run/dpdk/spdk_pid73814 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74029 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74170 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74240 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74385 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74421 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74498 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74698 00:29:31.523 Removing: /var/run/dpdk/spdk_pid74958 00:29:31.523 Removing: /var/run/dpdk/spdk_pid75381 00:29:31.523 Removing: /var/run/dpdk/spdk_pid75827 00:29:31.523 Removing: /var/run/dpdk/spdk_pid76255 00:29:31.523 Removing: /var/run/dpdk/spdk_pid76762 00:29:31.523 Removing: /var/run/dpdk/spdk_pid76906 00:29:31.523 Removing: /var/run/dpdk/spdk_pid77020 00:29:31.523 Removing: /var/run/dpdk/spdk_pid77713 00:29:31.523 Removing: /var/run/dpdk/spdk_pid77799 00:29:31.523 Removing: /var/run/dpdk/spdk_pid78275 00:29:31.523 Removing: /var/run/dpdk/spdk_pid78702 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79231 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79368 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79433 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79508 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79571 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79646 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79873 00:29:31.523 Removing: /var/run/dpdk/spdk_pid79925 00:29:31.523 Removing: /var/run/dpdk/spdk_pid80005 00:29:31.523 Removing: /var/run/dpdk/spdk_pid80079 00:29:31.523 Removing: /var/run/dpdk/spdk_pid80122 00:29:31.523 Removing: /var/run/dpdk/spdk_pid80197 00:29:31.523 Removing: /var/run/dpdk/spdk_pid80332 00:29:31.523 Clean 00:29:31.523 killing process with pid 48462 00:29:31.523 killing process with pid 48463 00:29:31.523 12:50:40 -- common/autotest_common.sh@1436 -- # return 0 00:29:31.523 12:50:40 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:29:31.523 12:50:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:31.523 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:29:31.523 12:50:40 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:29:31.523 12:50:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:31.523 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:29:31.843 12:50:40 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:31.843 12:50:40 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:31.843 12:50:40 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:31.843 12:50:40 -- spdk/autotest.sh@394 -- # hash lcov 00:29:31.844 12:50:40 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:31.844 12:50:40 -- spdk/autotest.sh@396 -- # hostname 00:29:31.844 12:50:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:31.844 geninfo: WARNING: invalid characters removed from testname! 
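[Editor's note] The tail of the run is coverage collection: autotest.sh@396 above captures the test-time lcov tracefile keyed to the VM hostname (the geninfo warning just means invalid testname characters were stripped), and the @397-@402 steps that follow merge it with the pre-test baseline and filter out dpdk, /usr, and example/app sources. Condensed into a hedged sketch — core flags as in the log, paths shortened, and the $repo/$out variables are assumptions:

    out=/home/vagrant/spdk_repo/output
    opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    # Capture coverage accumulated while the tests ran (baseline captured earlier).
    lcov $opts -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
    # Merge baseline + test, then strip sources that are not SPDK's own.
    lcov $opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    lcov $opts -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov $opts -r "$out/cov_total.info" '/usr/*'   -o "$out/cov_total.info"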
00:29:58.378 12:51:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:01.661 12:51:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:04.191 12:51:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:06.718 12:51:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:10.006 12:51:18 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:12.537 12:51:21 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:15.064 12:51:23 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:15.064 12:51:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:15.064 12:51:23 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:15.064 12:51:23 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.064 12:51:23 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.064 12:51:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.064 12:51:23 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.064 12:51:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.064 12:51:23 -- paths/export.sh@5 -- $ export PATH 00:30:15.064 12:51:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.064 12:51:23 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:15.064 12:51:23 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:15.064 12:51:23 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715777483.XXXXXX 00:30:15.064 12:51:23 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715777483.vMyGZH 00:30:15.064 12:51:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:15.064 12:51:23 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:15.064 12:51:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:15.064 12:51:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:15.064 12:51:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:15.064 12:51:23 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:15.064 12:51:23 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:15.064 12:51:23 -- common/autotest_common.sh@10 -- $ set +x 00:30:15.064 12:51:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:30:15.064 12:51:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:15.064 12:51:23 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:15.064 12:51:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:15.064 12:51:23 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:15.064 12:51:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:15.064 12:51:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:15.064 12:51:23 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:15.064 12:51:23 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:15.064 12:51:23 -- common/autotest_common.sh@727 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:15.064 12:51:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:15.064 + [[ -n 5281 ]] 00:30:15.064 + sudo kill 5281 00:30:15.329 [Pipeline] } 00:30:15.346 [Pipeline] // timeout 00:30:15.350 [Pipeline] } 00:30:15.366 [Pipeline] // stage 00:30:15.371 [Pipeline] } 00:30:15.388 [Pipeline] // catchError 00:30:15.398 [Pipeline] stage 00:30:15.400 [Pipeline] { (Stop VM) 00:30:15.411 [Pipeline] sh 00:30:15.684 + vagrant halt 00:30:19.868 ==> default: Halting domain... 00:30:26.482 [Pipeline] sh 00:30:26.789 + vagrant destroy -f 00:30:30.974 ==> default: Removing domain... 00:30:30.985 [Pipeline] sh 00:30:31.265 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:30:31.275 [Pipeline] } 00:30:31.295 [Pipeline] // stage 00:30:31.302 [Pipeline] } 00:30:31.321 [Pipeline] // dir 00:30:31.327 [Pipeline] } 00:30:31.346 [Pipeline] // wrap 00:30:31.354 [Pipeline] } 00:30:31.372 [Pipeline] // catchError 00:30:31.382 [Pipeline] stage 00:30:31.384 [Pipeline] { (Epilogue) 00:30:31.399 [Pipeline] sh 00:30:31.709 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:38.288 [Pipeline] catchError 00:30:38.290 [Pipeline] { 00:30:38.307 [Pipeline] sh 00:30:38.590 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:38.590 Artifacts sizes are good 00:30:38.599 [Pipeline] } 00:30:38.616 [Pipeline] // catchError 00:30:38.628 [Pipeline] archiveArtifacts 00:30:38.635 Archiving artifacts 00:30:38.811 [Pipeline] cleanWs 00:30:38.831 [WS-CLEANUP] Deleting project workspace... 00:30:38.831 [WS-CLEANUP] Deferred wipeout is used... 00:30:38.848 [WS-CLEANUP] done 00:30:38.872 [Pipeline] } 00:30:38.883 [Pipeline] // stage 00:30:38.886 [Pipeline] } 00:30:38.895 [Pipeline] // node 00:30:38.898 [Pipeline] End of Pipeline 00:30:38.920 Finished: SUCCESS
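[Editor's note] For reference, the pipeline teardown above reduces to three host-side shell steps, verbatim from the Stop VM and Epilogue stages — halt the guest, destroy it, and move the collected output into the Jenkins workspace for archiving:

    vagrant halt
    vagrant destroy -f
    mv output /var/jenkins/workspace/nvme-vg-autotest/output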