00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2384
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3645
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.223 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.223 The recommended git tool is: git
00:00:00.224 using credential 00000000-0000-0000-0000-000000000002
00:00:00.225 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.249 Fetching changes from the remote Git repository
00:00:00.251 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.274 Using shallow fetch with depth 1
00:00:00.274 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.274 > git --version # timeout=10
00:00:00.295 > git --version # 'git version 2.39.2'
00:00:00.295 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.244 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.255 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.266 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.266 > git config core.sparsecheckout # timeout=10
00:00:07.276 > git read-tree -mu HEAD # timeout=10
00:00:07.291 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.308 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.308 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.412 [Pipeline] Start of Pipeline
00:00:07.423 [Pipeline] library
00:00:07.424 Loading library shm_lib@master
00:00:07.425 Library shm_lib@master is cached. Copying from home.
00:00:07.437 [Pipeline] node
00:00:07.448 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.449 [Pipeline] {
00:00:07.458 [Pipeline] catchError
00:00:07.459 [Pipeline] {
00:00:07.469 [Pipeline] wrap
00:00:07.474 [Pipeline] {
00:00:07.481 [Pipeline] stage
00:00:07.483 [Pipeline] { (Prologue)
00:00:07.500 [Pipeline] echo
00:00:07.501 Node: VM-host-SM38
00:00:07.507 [Pipeline] cleanWs
00:00:07.516 [WS-CLEANUP] Deleting project workspace...
00:00:07.517 [WS-CLEANUP] Deferred wipeout is used...
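(Note: the jbp checkout traced above boils down to a shallow, pinned fetch; a minimal standalone equivalent, assuming the HTTPS credentials and proxy that Jenkins injects via GIT_ASKPASS are already configured in the environment, would be:

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507

The --depth=1 keeps the build-pool checkout small; Jenkins' git plugin runs the same sequence with the per-command timeouts shown.)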
00:00:07.522 [WS-CLEANUP] done
00:00:07.735 [Pipeline] setCustomBuildProperty
00:00:07.814 [Pipeline] httpRequest
00:00:08.387 [Pipeline] echo
00:00:08.389 Sorcerer 10.211.164.20 is alive
00:00:08.401 [Pipeline] retry
00:00:08.403 [Pipeline] {
00:00:08.419 [Pipeline] httpRequest
00:00:08.425 HttpMethod: GET
00:00:08.426 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.426 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.438 Response Code: HTTP/1.1 200 OK
00:00:08.439 Success: Status code 200 is in the accepted range: 200,404
00:00:08.440 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.740 [Pipeline] }
00:00:12.759 [Pipeline] // retry
00:00:12.768 [Pipeline] sh
00:00:13.056 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.073 [Pipeline] httpRequest
00:00:13.446 [Pipeline] echo
00:00:13.448 Sorcerer 10.211.164.20 is alive
00:00:13.458 [Pipeline] retry
00:00:13.460 [Pipeline] {
00:00:13.474 [Pipeline] httpRequest
00:00:13.479 HttpMethod: GET
00:00:13.480 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:13.481 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:13.495 Response Code: HTTP/1.1 200 OK
00:00:13.496 Success: Status code 200 is in the accepted range: 200,404
00:00:13.497 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:13.451 [Pipeline] }
00:01:13.469 [Pipeline] // retry
00:01:13.477 [Pipeline] sh
00:01:13.764 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:16.315 [Pipeline] sh
00:01:16.657 + git -C spdk log --oneline -n5
00:01:16.657 c13c99a5e test: Various fixes for Fedora40
00:01:16.657 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:16.657 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:16.657 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:16.657 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:16.679 [Pipeline] writeFile
00:01:16.694 [Pipeline] sh
00:01:16.981 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:16.994 [Pipeline] sh
00:01:17.280 + cat autorun-spdk.conf
00:01:17.280 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.280 SPDK_TEST_NVME=1
00:01:17.280 SPDK_TEST_FTL=1
00:01:17.280 SPDK_TEST_ISAL=1
00:01:17.280 SPDK_RUN_ASAN=1
00:01:17.280 SPDK_RUN_UBSAN=1
00:01:17.280 SPDK_TEST_XNVME=1
00:01:17.280 SPDK_TEST_NVME_FDP=1
00:01:17.280 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.287 RUN_NIGHTLY=1
00:01:17.289 [Pipeline] }
00:01:17.301 [Pipeline] // stage
00:01:17.315 [Pipeline] stage
00:01:17.317 [Pipeline] { (Run VM)
00:01:17.330 [Pipeline] sh
00:01:17.615 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:17.615 + echo 'Start stage prepare_nvme.sh'
00:01:17.615 Start stage prepare_nvme.sh
00:01:17.615 + [[ -n 10 ]]
00:01:17.615 + disk_prefix=ex10
00:01:17.615 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:17.615 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:17.615 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:17.615 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.615 ++ SPDK_TEST_NVME=1
00:01:17.615 ++ SPDK_TEST_FTL=1
00:01:17.615 ++ SPDK_TEST_ISAL=1
00:01:17.615 ++ SPDK_RUN_ASAN=1
00:01:17.615 ++ SPDK_RUN_UBSAN=1
00:01:17.615 ++ SPDK_TEST_XNVME=1
00:01:17.615 ++ SPDK_TEST_NVME_FDP=1
00:01:17.615 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.615 ++ RUN_NIGHTLY=1
00:01:17.615 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:17.615 + nvme_files=()
00:01:17.615 + declare -A nvme_files
00:01:17.615 + backend_dir=/var/lib/libvirt/images/backends
00:01:17.615 + nvme_files['nvme.img']=5G
00:01:17.615 + nvme_files['nvme-cmb.img']=5G
00:01:17.615 + nvme_files['nvme-multi0.img']=4G
00:01:17.615 + nvme_files['nvme-multi1.img']=4G
00:01:17.615 + nvme_files['nvme-multi2.img']=4G
00:01:17.615 + nvme_files['nvme-openstack.img']=8G
00:01:17.615 + nvme_files['nvme-zns.img']=5G
00:01:17.615 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:17.615 + (( SPDK_TEST_FTL == 1 ))
00:01:17.615 + nvme_files["nvme-ftl.img"]=6G
00:01:17.615 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:17.615 + nvme_files["nvme-fdp.img"]=1G
00:01:17.615 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:17.615 + for nvme in "${!nvme_files[@]}"
00:01:17.615 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:01:17.876 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:17.876 + for nvme in "${!nvme_files[@]}"
00:01:17.876 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:01:18.446 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:18.446 + for nvme in "${!nvme_files[@]}"
00:01:18.446 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:01:18.446 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:18.446 + for nvme in "${!nvme_files[@]}"
00:01:18.446 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:01:18.707 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:18.707 + for nvme in "${!nvme_files[@]}"
00:01:18.707 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:01:18.707 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:18.707 + for nvme in "${!nvme_files[@]}"
00:01:18.707 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:01:18.968 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.968 + for nvme in "${!nvme_files[@]}"
00:01:18.968 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:01:19.228 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:19.228 + for nvme in "${!nvme_files[@]}"
00:01:19.228 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:01:19.486 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:19.486 + for nvme in "${!nvme_files[@]}"
00:01:19.486 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:01:19.745 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.004 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:01:20.004 + echo 'End stage prepare_nvme.sh'
00:01:20.004 End stage prepare_nvme.sh
00:01:20.014 [Pipeline] sh
00:01:20.296 + DISTRO=fedora39
00:01:20.296 + CPUS=10
00:01:20.296 + RAM=12288
00:01:20.296 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:20.296 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:20.296
00:01:20.296 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:20.296 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:20.296 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:20.296 HELP=0
00:01:20.296 DRY_RUN=0
00:01:20.296 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:01:20.296 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:20.296 NVME_AUTO_CREATE=0
00:01:20.296 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:01:20.296 NVME_CMB=,,,,
00:01:20.296 NVME_PMR=,,,,
00:01:20.296 NVME_ZNS=,,,,
00:01:20.296 NVME_MS=true,,,,
00:01:20.296 NVME_FDP=,,,on,
00:01:20.296 SPDK_VAGRANT_DISTRO=fedora39
00:01:20.296 SPDK_VAGRANT_VMCPU=10
00:01:20.296 SPDK_VAGRANT_VMRAM=12288
00:01:20.296 SPDK_VAGRANT_PROVIDER=libvirt
00:01:20.296 SPDK_VAGRANT_HTTP_PROXY=
00:01:20.296 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:20.296 SPDK_OPENSTACK_NETWORK=0
00:01:20.296 VAGRANT_PACKAGE_BOX=0
00:01:20.296 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:20.296 FORCE_DISTRO=true
00:01:20.296 VAGRANT_BOX_VERSION=
00:01:20.296 EXTRA_VAGRANTFILES=
00:01:20.296 NIC_MODEL=e1000
00:01:20.296
00:01:20.296 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:20.296 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:22.842 Bringing machine 'default' up with 'libvirt' provider...
00:01:23.103 ==> default: Creating image (snapshot of base box volume).
00:01:23.365 ==> default: Creating domain with the following settings...
00:01:23.365 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732000592_d7919bc56f387af91ae1
00:01:23.365 ==> default: -- Domain type: kvm
00:01:23.365 ==> default: -- Cpus: 10
00:01:23.365 ==> default: -- Feature: acpi
00:01:23.365 ==> default: -- Feature: apic
00:01:23.365 ==> default: -- Feature: pae
00:01:23.365 ==> default: -- Memory: 12288M
00:01:23.365 ==> default: -- Memory Backing: hugepages:
00:01:23.365 ==> default: -- Management MAC:
00:01:23.365 ==> default: -- Loader:
00:01:23.365 ==> default: -- Nvram:
00:01:23.365 ==> default: -- Base box: spdk/fedora39
00:01:23.365 ==> default: -- Storage pool: default
00:01:23.365 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732000592_d7919bc56f387af91ae1.img (20G)
00:01:23.365 ==> default: -- Volume Cache: default
00:01:23.365 ==> default: -- Kernel:
00:01:23.365 ==> default: -- Initrd:
00:01:23.365 ==> default: -- Graphics Type: vnc
00:01:23.365 ==> default: -- Graphics Port: -1
00:01:23.365 ==> default: -- Graphics IP: 127.0.0.1
00:01:23.365 ==> default: -- Graphics Password: Not defined
00:01:23.365 ==> default: -- Video Type: cirrus
00:01:23.365 ==> default: -- Video VRAM: 9216
00:01:23.365 ==> default: -- Sound Type:
00:01:23.365 ==> default: -- Keymap: en-us
00:01:23.365 ==> default: -- TPM Path:
00:01:23.365 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:23.365 ==> default: -- Command line args:
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:23.365 ==> default: -> value=-drive,
00:01:23.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:23.365 ==> default: -> value=-drive,
00:01:23.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:23.365 ==> default: -> value=-drive,
00:01:23.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:23.365 ==> default: -> value=-drive,
00:01:23.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:23.365 ==> default: -> value=-drive,
00:01:23.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:23.365 ==> default: -> value=-drive,
00:01:23.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:23.365 ==> default: -> value=-device,
00:01:23.365 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:23.655 ==> default: Creating shared folders metadata...
00:01:23.655 ==> default: Starting domain.
00:01:24.637 ==> default: Waiting for domain to get an IP address...
00:01:46.611 ==> default: Waiting for SSH to become available...
00:01:47.554 ==> default: Configuring and enabling network interfaces...
00:01:51.817 default: SSH address: 192.168.121.71:22
00:01:51.817 default: SSH username: vagrant
00:01:51.817 default: SSH auth method: private key
00:01:53.727 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:01.906 ==> default: Mounting SSHFS shared folder...
00:02:03.816 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:03.816 ==> default: Checking Mount..
00:02:04.759 ==> default: Folder Successfully Mounted!
00:02:05.020
00:02:05.020 SUCCESS!
00:02:05.020
00:02:05.020 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:05.020 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:05.020 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:05.020
00:02:05.032 [Pipeline] }
00:02:05.046 [Pipeline] // stage
00:02:05.057 [Pipeline] dir
00:02:05.057 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:05.059 [Pipeline] {
00:02:05.073 [Pipeline] catchError
00:02:05.075 [Pipeline] {
00:02:05.088 [Pipeline] sh
00:02:05.373 + vagrant ssh-config --host vagrant
00:02:05.373 + sed -ne '/^Host/,$p'
00:02:05.373 + tee ssh_conf
00:02:07.911 Host vagrant
00:02:07.911 HostName 192.168.121.71
00:02:07.911 User vagrant
00:02:07.911 Port 22
00:02:07.911 UserKnownHostsFile /dev/null
00:02:07.911 StrictHostKeyChecking no
00:02:07.911 PasswordAuthentication no
00:02:07.911 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:07.911 IdentitiesOnly yes
00:02:07.911 LogLevel FATAL
00:02:07.911 ForwardAgent yes
00:02:07.911 ForwardX11 yes
00:02:07.911
00:02:07.926 [Pipeline] withEnv
00:02:07.928 [Pipeline] {
00:02:07.940 [Pipeline] sh
00:02:08.223 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:08.224 source /etc/os-release
00:02:08.224 [[ -e /image.version ]] && img=$(< /image.version)
00:02:08.224 # Minimal, systemd-like check.
00:02:08.224 if [[ -e /.dockerenv ]]; then
00:02:08.224 # Clear garbage from the node'\''s name:
00:02:08.224 # agt-er_autotest_547-896 -> autotest_547-896
00:02:08.224 # $HOSTNAME is the actual container id
00:02:08.224 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:08.224 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:08.224 # We can assume this is a mount from a host where container is running,
00:02:08.224 # so fetch its hostname to easily identify the target swarm worker.
00:02:08.224 container="$(< /etc/hostname) ($agent)"
00:02:08.224 else
00:02:08.224 # Fallback
00:02:08.224 container=$agent
00:02:08.224 fi
00:02:08.224 fi
00:02:08.224 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:08.224 '
00:02:08.498 [Pipeline] }
00:02:08.514 [Pipeline] // withEnv
00:02:08.522 [Pipeline] setCustomBuildProperty
00:02:08.538 [Pipeline] stage
00:02:08.541 [Pipeline] { (Tests)
00:02:08.558 [Pipeline] sh
00:02:08.842 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:09.119 [Pipeline] sh
00:02:09.403 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:09.679 [Pipeline] timeout
00:02:09.679 Timeout set to expire in 50 min
00:02:09.681 [Pipeline] {
00:02:09.695 [Pipeline] sh
00:02:09.976 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:10.547 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:10.560 [Pipeline] sh
00:02:10.845 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:11.119 [Pipeline] sh
00:02:11.402 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:11.679 [Pipeline] sh
00:02:11.958 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:12.218 ++ readlink -f spdk_repo
00:02:12.218 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:12.218 + [[ -n /home/vagrant/spdk_repo ]]
00:02:12.218 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:12.218 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:12.218 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:12.218 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:12.218 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:12.218 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:12.218 + cd /home/vagrant/spdk_repo
00:02:12.218 + source /etc/os-release
00:02:12.218 ++ NAME='Fedora Linux'
00:02:12.218 ++ VERSION='39 (Cloud Edition)'
00:02:12.218 ++ ID=fedora
00:02:12.218 ++ VERSION_ID=39
00:02:12.218 ++ VERSION_CODENAME=
00:02:12.218 ++ PLATFORM_ID=platform:f39
00:02:12.218 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:12.218 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:12.218 ++ LOGO=fedora-logo-icon
00:02:12.218 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:12.218 ++ HOME_URL=https://fedoraproject.org/
00:02:12.218 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:12.218 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:12.218 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:12.218 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:12.218 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:12.218 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:12.218 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:12.218 ++ SUPPORT_END=2024-11-12
00:02:12.218 ++ VARIANT='Cloud Edition'
00:02:12.218 ++ VARIANT_ID=cloud
00:02:12.218 + uname -a
00:02:12.218 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:12.218 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:12.218 Hugepages
00:02:12.218 node hugesize free / total
00:02:12.218 node0 1048576kB 0 / 0
00:02:12.218 node0 2048kB 0 / 0
00:02:12.218
00:02:12.218 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:12.218 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:12.218 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:12.218 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:12.218 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:12.480 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:12.480 + rm -f /tmp/spdk-ld-path
00:02:12.480 + source autorun-spdk.conf
00:02:12.480 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.480 ++ SPDK_TEST_NVME=1
00:02:12.480 ++ SPDK_TEST_FTL=1
00:02:12.480 ++ SPDK_TEST_ISAL=1
00:02:12.480 ++ SPDK_RUN_ASAN=1
00:02:12.480 ++ SPDK_RUN_UBSAN=1
00:02:12.480 ++ SPDK_TEST_XNVME=1
00:02:12.480 ++ SPDK_TEST_NVME_FDP=1
00:02:12.480 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:12.480 ++ RUN_NIGHTLY=1
00:02:12.480 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:12.480 + [[ -n '' ]]
00:02:12.480 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:12.480 + for M in /var/spdk/build-*-manifest.txt
00:02:12.480 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:12.480 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:12.480 + for M in /var/spdk/build-*-manifest.txt
00:02:12.480 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:12.480 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:12.480 + for M in /var/spdk/build-*-manifest.txt
00:02:12.480 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:12.480 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:12.480 ++ uname
00:02:12.480 + [[ Linux == \L\i\n\u\x ]]
00:02:12.480 + sudo dmesg -T
00:02:12.480 + sudo dmesg --clear
00:02:12.480 + dmesg_pid=4995
00:02:12.480 + [[ Fedora Linux == FreeBSD ]]
00:02:12.480 + sudo dmesg -Tw
00:02:12.480 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:12.480 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:12.480 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:12.480 + [[ -x /usr/src/fio-static/fio ]]
00:02:12.480 + export FIO_BIN=/usr/src/fio-static/fio
00:02:12.480 + FIO_BIN=/usr/src/fio-static/fio
00:02:12.480 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:12.480 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:12.480 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:12.480 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:12.480 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:12.480 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:12.480 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:12.480 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:12.480 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:12.480 Test configuration:
00:02:12.480 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.480 SPDK_TEST_NVME=1
00:02:12.480 SPDK_TEST_FTL=1
00:02:12.480 SPDK_TEST_ISAL=1
00:02:12.480 SPDK_RUN_ASAN=1
00:02:12.480 SPDK_RUN_UBSAN=1
00:02:12.480 SPDK_TEST_XNVME=1
00:02:12.480 SPDK_TEST_NVME_FDP=1
00:02:12.480 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:12.480 RUN_NIGHTLY=1 07:17:21 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:02:12.480 07:17:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:12.480 07:17:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:12.480 07:17:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:12.480 07:17:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:12.480 07:17:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:12.480 07:17:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:12.480 07:17:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:12.480 07:17:21 -- paths/export.sh@5 -- $ export PATH
00:02:12.480 07:17:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:12.480 07:17:21 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:12.480 07:17:21 -- common/autobuild_common.sh@440 -- $ date +%s
00:02:12.480 07:17:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732000641.XXXXXX
00:02:12.480 07:17:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732000641.7mPd71
00:02:12.480 07:17:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:02:12.480 07:17:21 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:02:12.480 07:17:21 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:12.480 07:17:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:12.480 07:17:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:12.480 07:17:21 -- common/autobuild_common.sh@456 -- $ get_config_params
00:02:12.480 07:17:21 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:02:12.480 07:17:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.480 07:17:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:12.480 07:17:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:12.480 07:17:21 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:12.480 07:17:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:12.481 07:17:21 -- spdk/autobuild.sh@16 -- $ date -u
00:02:12.481 Tue Nov 19 07:17:21 AM UTC 2024
00:02:12.481 07:17:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:12.742 LTS-67-gc13c99a5e
00:02:12.742 07:17:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:12.742 07:17:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:12.742 07:17:21 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:12.742 07:17:21 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:12.742 07:17:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.742 ************************************
00:02:12.742 START TEST asan
00:02:12.742 ************************************
00:02:12.742 using asan
00:02:12.742 07:17:21 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:02:12.742 ************************************
00:02:12.742 END TEST asan
00:02:12.742 ************************************
00:02:12.742
00:02:12.742 real 0m0.000s
00:02:12.742 user 0m0.000s
00:02:12.742 sys 0m0.000s
00:02:12.742 07:17:21 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:12.742 07:17:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.742 07:17:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:12.742 07:17:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:12.742 07:17:21 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:12.742 07:17:21 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:12.742 07:17:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.742 ************************************
00:02:12.742 START TEST ubsan
00:02:12.742 ************************************
00:02:12.742 using ubsan
00:02:12.742 07:17:21 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:12.742
00:02:12.742 real 0m0.000s
00:02:12.742 user 0m0.000s
00:02:12.742 sys 0m0.000s
************************************
00:02:12.742 END TEST ubsan
00:02:12.742 ************************************
00:02:12.742 07:17:21 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:12.742 07:17:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.742 07:17:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:12.742 07:17:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:12.742 07:17:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:12.742 07:17:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:12.742 07:17:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:12.742 07:17:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:12.742 07:17:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:12.742 07:17:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:12.742 07:17:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:12.742 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:12.742 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:13.314 Using 'verbs' RDMA provider
00:02:26.149 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:36.148 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:36.148 Creating mk/config.mk...done.
00:02:36.148 Creating mk/cc.flags.mk...done.
00:02:36.148 Type 'make' to build.
00:02:36.148 07:17:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:36.148 07:17:44 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:36.148 07:17:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:36.148 07:17:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:36.148 ************************************
00:02:36.148 START TEST make
00:02:36.148 ************************************
00:02:36.148 07:17:44 -- common/autotest_common.sh@1114 -- $ make -j10
00:02:36.148 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:36.148 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:36.148 meson setup builddir \
00:02:36.148 -Dwith-libaio=enabled \
00:02:36.148 -Dwith-liburing=enabled \
00:02:36.148 -Dwith-libvfn=disabled \
00:02:36.148 -Dwith-spdk=false && \
00:02:36.148 meson compile -C builddir && \
00:02:36.148 cd -)
00:02:36.148 make[1]: Nothing to be done for 'all'.
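(Note: the xnvme step above is just a meson configure-and-compile that SPDK's make target wraps; a minimal sketch for reproducing it by hand, assuming an xnvme checkout and meson/ninja on PATH, with the builddir name and options mirroring the log:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled -Dwith-libvfn=disabled -Dwith-spdk=false
    meson compile -C builddir

The -D options toggle xnvme's optional I/O backends: libaio and liburing are enabled here, while the libvfn and SPDK backends are compiled out.)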
00:02:37.524 The Meson build system
00:02:37.524 Version: 1.5.0
00:02:37.524 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:37.524 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:37.524 Build type: native build
00:02:37.524 Project name: xnvme
00:02:37.524 Project version: 0.7.3
00:02:37.524 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:37.524 C linker for the host machine: cc ld.bfd 2.40-14
00:02:37.524 Host machine cpu family: x86_64
00:02:37.524 Host machine cpu: x86_64
00:02:37.524 Message: host_machine.system: linux
00:02:37.524 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:37.524 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:37.524 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:37.524 Run-time dependency threads found: YES
00:02:37.524 Has header "setupapi.h" : NO
00:02:37.524 Has header "linux/blkzoned.h" : YES
00:02:37.524 Has header "linux/blkzoned.h" : YES (cached)
00:02:37.524 Has header "libaio.h" : YES
00:02:37.524 Library aio found: YES
00:02:37.524 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:37.524 Run-time dependency liburing found: YES 2.2
00:02:37.524 Dependency libvfn skipped: feature with-libvfn disabled
00:02:37.524 Run-time dependency appleframeworks found: NO (tried framework)
00:02:37.524 Run-time dependency appleframeworks found: NO (tried framework)
00:02:37.524 Configuring xnvme_config.h using configuration
00:02:37.524 Configuring xnvme.spec using configuration
00:02:37.524 Run-time dependency bash-completion found: YES 2.11
00:02:37.524 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:37.524 Program cp found: YES (/usr/bin/cp)
00:02:37.524 Has header "winsock2.h" : NO
00:02:37.524 Has header "dbghelp.h" : NO
00:02:37.524 Library rpcrt4 found: NO
00:02:37.524 Library rt found: YES
00:02:37.524 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:37.524 Found CMake: /usr/bin/cmake (3.27.7)
00:02:37.524 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:37.524 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:37.524 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:37.524 Build targets in project: 32
00:02:37.524
00:02:37.524 xnvme 0.7.3
00:02:37.524
00:02:37.524 User defined options
00:02:37.524 with-libaio : enabled
00:02:37.524 with-liburing: enabled
00:02:37.524 with-libvfn : disabled
00:02:37.524 with-spdk : false
00:02:37.524
00:02:37.524 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:37.783 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:37.783 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:37.783 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:37.783 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:37.783 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:37.783 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:37.783 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:37.783 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:37.783 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:37.783 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:37.783 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:38.041 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:38.041 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:38.041 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:38.042 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:38.042 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:38.042 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:38.042 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:38.042 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:38.042 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:38.042 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:38.042 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:38.042 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:38.042 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:38.042 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:38.042 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:38.042 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:38.042 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:38.042 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:38.042 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:38.042 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:38.042 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:38.042 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:38.042 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:38.042 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:38.042 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:38.042 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:38.042 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:38.042 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:38.042 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:38.042 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:38.042 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:38.042 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:38.042 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:38.042 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:38.042 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:38.042 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:38.042 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:38.042 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:38.042 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:38.042 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:38.301 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:38.301 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:38.301 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:38.301 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:38.301 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:38.301 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:38.301 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:38.301 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:38.301 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:38.301 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:38.301 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:38.301 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:38.301 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:38.301 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:38.301 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:38.301 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:38.301 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:38.301 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:38.301 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:38.301 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:38.301 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:38.301 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:38.301 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:38.301 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:38.301 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:38.562 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:38.562 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:38.562 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:38.562 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:38.562 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:38.562 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:38.562 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:38.562 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:38.562 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:38.562 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:38.562 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:38.562 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:38.562 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:38.562 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:38.562 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:38.562 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:38.562 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:38.562 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:38.562 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:38.562 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:38.562 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:38.562 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:38.562 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:38.562 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:38.562 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:38.562 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:38.562 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:38.562 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:38.822 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:38.822 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:38.822 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:38.822 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:38.822 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:38.822 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:38.822 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:38.822 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:38.822 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:38.822 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:38.822 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:38.822 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:38.822 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:38.822 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:38.822 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:38.822 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:38.822 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:38.822 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:38.822 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:38.822 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:38.822 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:38.822 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:38.822 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:38.822 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:38.822 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:38.822 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:38.822 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:38.822 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:38.822 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:38.822 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:38.822 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:38.822 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:38.822 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:39.081 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:39.081 [138/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:39.081 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:39.081 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:39.081 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:39.081 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:39.081 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:39.081 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:39.081 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:39.081 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:39.081 [147/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:39.081 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:39.081 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:39.081 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:39.081 [151/203] Linking target lib/libxnvme.so
00:02:39.081 [152/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:39.081 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:39.081 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:39.081 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:39.081 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:39.081 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:39.341 [158/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:39.341 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:39.341 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:39.341 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:39.341 [162/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:39.341 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:39.341 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:39.341 [165/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:39.341 [166/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:39.341 [167/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:39.341 [168/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:39.341 [169/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:39.341 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:39.341 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:39.599 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:39.599 [173/203] Linking static target lib/libxnvme.a
00:02:39.599 [174/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:39.599 [175/203] Linking target tests/xnvme_tests_buf
00:02:39.599 [176/203] Linking target tests/xnvme_tests_async_intf
00:02:39.599 [177/203] Linking target tests/xnvme_tests_ioworker
00:02:39.599 [178/203] Linking target tests/xnvme_tests_enum
00:02:39.599 [179/203] Linking target tests/xnvme_tests_cli
00:02:39.599 [180/203] Linking target tests/xnvme_tests_lblk
00:02:39.599 [181/203] Linking target tests/xnvme_tests_scc
00:02:39.599 [182/203] Linking target tests/xnvme_tests_xnvme_file
00:02:39.599 [183/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:39.599 [184/203] Linking target tests/xnvme_tests_znd_append
00:02:39.599 [185/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:39.599 [186/203] Linking target tests/xnvme_tests_znd_state
00:02:39.599 [187/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:39.600 [188/203] Linking target tests/xnvme_tests_kvs
00:02:39.600 [189/203] Linking target tools/xdd
00:02:39.600 [190/203] Linking target tests/xnvme_tests_map
00:02:39.600 [191/203] Linking target tools/lblk
00:02:39.600 [192/203] Linking target tools/xnvme
00:02:39.600 [193/203] Linking target tools/xnvme_file
00:02:39.600 [194/203] Linking target tools/zoned
00:02:39.600 [195/203] Linking target tools/kvs
00:02:39.600 [196/203] Linking target examples/xnvme_hello
00:02:39.600 [197/203] Linking target examples/xnvme_enum
00:02:39.600 [198/203] Linking target examples/xnvme_dev
00:02:39.600 [199/203] Linking target examples/xnvme_single_async
00:02:39.600 [200/203] Linking target examples/xnvme_io_async
00:02:39.600 [201/203] Linking target examples/xnvme_single_sync
00:02:39.600 [202/203] Linking target examples/zoned_io_sync
00:02:39.600 [203/203] Linking target examples/zoned_io_async
00:02:39.600 INFO: autodetecting backend as ninja
00:02:39.600 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:39.600 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:43.778 The Meson build system
00:02:43.778 Version: 1.5.0
00:02:43.778 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:43.778 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:43.778 Build type: native build
00:02:43.778 Program cat found: YES (/usr/bin/cat)
00:02:43.778 Project name: DPDK
00:02:43.778 Project version: 23.11.0
00:02:43.778 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:43.778 C linker for the host machine: cc ld.bfd 2.40-14
00:02:43.778 Host machine cpu family: x86_64
00:02:43.778 Host machine cpu: x86_64
00:02:43.778 Message: ## Building in Developer Mode ##
00:02:43.778 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:43.778 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:43.778 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:43.778 Program python3 found: YES (/usr/bin/python3)
00:02:43.778 Program cat found: YES (/usr/bin/cat)
00:02:43.779 Compiler for C supports arguments -march=native: YES
00:02:43.778 Checking for size of "void *" : 8
00:02:43.778 Checking for size of "void *" : 8 (cached)
00:02:43.779 Library m found: YES
00:02:43.779 Library numa found: YES
00:02:43.779 Has header "numaif.h" : YES
00:02:43.779 Library fdt found: NO
00:02:43.779 Library execinfo found: NO
00:02:43.779 Has header "execinfo.h" : YES
00:02:43.779 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:43.779 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:43.779 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:43.779 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:43.779 Run-time dependency openssl found: YES 3.1.1
00:02:43.779 Run-time dependency libpcap found: YES 1.10.4
00:02:43.779 Has header "pcap.h" with dependency libpcap: YES
00:02:43.779 Compiler for C supports arguments -Wcast-qual: YES
00:02:43.779 Compiler for C supports arguments -Wdeprecated: YES
00:02:43.779 Compiler for C supports arguments -Wformat: YES
00:02:43.779 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:43.779 Compiler for C supports arguments -Wformat-security: NO
00:02:43.779 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:43.779 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:43.779 Compiler for C supports arguments -Wnested-externs: YES
00:02:43.779 Compiler for C supports arguments -Wold-style-definition: YES
00:02:43.779 Compiler for C supports arguments -Wpointer-arith: YES
00:02:43.779 Compiler for C supports arguments -Wsign-compare: YES
00:02:43.779 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:43.779 Compiler for C supports arguments -Wundef: YES
00:02:43.779 Compiler for C supports arguments -Wwrite-strings: YES
00:02:43.779 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:43.779 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:43.779 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:43.779 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:43.779 Program objdump found: YES (/usr/bin/objdump)
00:02:43.779 Compiler for C supports arguments -mavx512f: YES
00:02:43.779 Checking if "AVX512 checking" compiles: YES
00:02:43.779 Fetching value of define "__SSE4_2__" : 1
00:02:43.779 Fetching value of define "__AES__" : 1
00:02:43.779 Fetching value of define "__AVX__" : 1
00:02:43.779 Fetching value of define "__AVX2__" : 1
00:02:43.779 Fetching value of define "__AVX512BW__" : 1
00:02:43.779 Fetching value of define "__AVX512CD__" : 1
00:02:43.779 Fetching value of define "__AVX512DQ__" : 1
00:02:43.779 Fetching value of define "__AVX512F__" : 1
00:02:43.779 Fetching value of define "__AVX512VL__" : 1
00:02:43.779 Fetching value of define "__PCLMUL__" : 1
00:02:43.779 Fetching value of define "__RDRND__" : 1
00:02:43.779 Fetching value of define "__RDSEED__" : 1
00:02:43.779 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:43.779 Fetching value of define "__znver1__" : (undefined)
00:02:43.779 Fetching value of define "__znver2__" : (undefined)
00:02:43.779 Fetching value of define "__znver3__" : (undefined)
00:02:43.779 Fetching value of define "__znver4__" : (undefined)
00:02:43.779 Library asan found: YES
00:02:43.779 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:43.779 Message: lib/log: Defining dependency "log"
00:02:43.779 Message: lib/kvargs: Defining dependency "kvargs"
00:02:43.779 Message: lib/telemetry: Defining dependency "telemetry"
00:02:43.779 Library rt found: YES
00:02:43.779 Checking for function "getentropy" : NO
00:02:43.779 Message: lib/eal: Defining dependency "eal"
00:02:43.779 Message: lib/ring: Defining dependency "ring"
00:02:43.779 Message: lib/rcu: Defining dependency "rcu"
00:02:43.779 Message: lib/mempool: Defining dependency "mempool"
00:02:43.779 Message: lib/mbuf: Defining dependency "mbuf"
00:02:43.779 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:43.779 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:43.779 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:43.779 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:43.779 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:43.779 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:43.779 Compiler for C supports arguments -mpclmul: YES
00:02:43.779 Compiler for C supports arguments -maes: YES
00:02:43.779 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:43.779 Compiler for C supports arguments -mavx512bw: YES
00:02:43.779 Compiler for C supports arguments -mavx512dq: YES
00:02:43.779 Compiler for C supports arguments -mavx512vl: YES
00:02:43.779 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:43.779 Compiler for C supports arguments -mavx2: YES
00:02:43.779 Compiler for C supports arguments -mavx: YES
00:02:43.779 Message: lib/net: Defining dependency "net"
00:02:43.779 Message: lib/meter: Defining dependency "meter"
00:02:43.779 Message: lib/ethdev: Defining dependency "ethdev"
00:02:43.779 Message: lib/pci: Defining dependency "pci"
00:02:43.779 Message: lib/cmdline: Defining dependency "cmdline"
00:02:43.779 Message: lib/hash: Defining dependency "hash"
00:02:43.779 Message: lib/timer: Defining dependency "timer"
00:02:43.779 Message: lib/compressdev: Defining dependency "compressdev"
00:02:43.779 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:43.779 Message: lib/dmadev: Defining dependency "dmadev"
00:02:43.779 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:43.779 Message: lib/power: Defining dependency "power"
00:02:43.779 Message: lib/reorder: Defining dependency "reorder"
00:02:43.779 Message: lib/security: Defining dependency "security"
00:02:43.779 Has header "linux/userfaultfd.h" : YES
00:02:43.779 Has header "linux/vduse.h" : YES
00:02:43.779 Message: lib/vhost: Defining dependency "vhost"
00:02:43.779 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:43.779 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:43.779 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:43.779 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:43.779 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:43.779 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:43.779 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:43.779 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:43.779 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:43.779 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:43.779 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:43.779 Configuring doxy-api-html.conf using configuration
00:02:43.779 Configuring doxy-api-man.conf using configuration
00:02:43.779 Program mandb found: YES (/usr/bin/mandb)
00:02:43.779 Program sphinx-build found: NO
00:02:43.779 Configuring rte_build_config.h using configuration
00:02:43.779 Message:
00:02:43.779 =================
00:02:43.779 Applications Enabled
00:02:43.779 =================
00:02:43.779
00:02:43.779 apps:
00:02:43.779
00:02:43.779
00:02:43.779 Message:
00:02:43.779 =================
00:02:43.779 Libraries Enabled
00:02:43.779 =================
00:02:43.779
00:02:43.779 libs:
00:02:43.779 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:43.779 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:43.779 cryptodev, dmadev, power, reorder, security, vhost,
00:02:43.779
00:02:43.779 Message:
00:02:43.779 ===============
00:02:43.779 Drivers Enabled
00:02:43.779 ===============
00:02:43.779
00:02:43.779 common:
00:02:43.779
00:02:43.779 bus:
00:02:43.779 pci, vdev,
00:02:43.779 mempool:
00:02:43.779 ring,
00:02:43.779 dma:
00:02:43.779
00:02:43.779 net:
00:02:43.779
00:02:43.779 crypto:
00:02:43.779
00:02:43.779 compress:
00:02:43.779
00:02:43.779 vdpa:
00:02:43.779
00:02:43.779
00:02:43.779 Message:
00:02:43.779 =================
00:02:43.779 Content Skipped
00:02:43.779 =================
00:02:43.779
00:02:43.779 apps:
00:02:43.779 dumpcap: explicitly disabled via build config
00:02:43.779 graph: explicitly disabled via build config
00:02:43.779 pdump: explicitly disabled via build config
00:02:43.779 proc-info: explicitly disabled via build config
00:02:43.779 test-acl: explicitly disabled via build config
00:02:43.779 test-bbdev: explicitly disabled via build config
00:02:43.779 test-cmdline: explicitly
disabled via build config 00:02:43.779 test-compress-perf: explicitly disabled via build config 00:02:43.779 test-crypto-perf: explicitly disabled via build config 00:02:43.779 test-dma-perf: explicitly disabled via build config 00:02:43.779 test-eventdev: explicitly disabled via build config 00:02:43.779 test-fib: explicitly disabled via build config 00:02:43.779 test-flow-perf: explicitly disabled via build config 00:02:43.779 test-gpudev: explicitly disabled via build config 00:02:43.779 test-mldev: explicitly disabled via build config 00:02:43.779 test-pipeline: explicitly disabled via build config 00:02:43.779 test-pmd: explicitly disabled via build config 00:02:43.779 test-regex: explicitly disabled via build config 00:02:43.779 test-sad: explicitly disabled via build config 00:02:43.779 test-security-perf: explicitly disabled via build config 00:02:43.779 00:02:43.779 libs: 00:02:43.779 metrics: explicitly disabled via build config 00:02:43.779 acl: explicitly disabled via build config 00:02:43.779 bbdev: explicitly disabled via build config 00:02:43.779 bitratestats: explicitly disabled via build config 00:02:43.779 bpf: explicitly disabled via build config 00:02:43.779 cfgfile: explicitly disabled via build config 00:02:43.779 distributor: explicitly disabled via build config 00:02:43.779 efd: explicitly disabled via build config 00:02:43.779 eventdev: explicitly disabled via build config 00:02:43.779 dispatcher: explicitly disabled via build config 00:02:43.779 gpudev: explicitly disabled via build config 00:02:43.779 gro: explicitly disabled via build config 00:02:43.779 gso: explicitly disabled via build config 00:02:43.779 ip_frag: explicitly disabled via build config 00:02:43.779 jobstats: explicitly disabled via build config 00:02:43.779 latencystats: explicitly disabled via build config 00:02:43.779 lpm: explicitly disabled via build config 00:02:43.779 member: explicitly disabled via build config 00:02:43.779 pcapng: explicitly disabled via build config 00:02:43.779 rawdev: explicitly disabled via build config 00:02:43.779 regexdev: explicitly disabled via build config 00:02:43.779 mldev: explicitly disabled via build config 00:02:43.779 rib: explicitly disabled via build config 00:02:43.779 sched: explicitly disabled via build config 00:02:43.779 stack: explicitly disabled via build config 00:02:43.779 ipsec: explicitly disabled via build config 00:02:43.779 pdcp: explicitly disabled via build config 00:02:43.779 fib: explicitly disabled via build config 00:02:43.779 port: explicitly disabled via build config 00:02:43.779 pdump: explicitly disabled via build config 00:02:43.780 table: explicitly disabled via build config 00:02:43.780 pipeline: explicitly disabled via build config 00:02:43.780 graph: explicitly disabled via build config 00:02:43.780 node: explicitly disabled via build config 00:02:43.780 00:02:43.780 drivers: 00:02:43.780 common/cpt: not in enabled drivers build config 00:02:43.780 common/dpaax: not in enabled drivers build config 00:02:43.780 common/iavf: not in enabled drivers build config 00:02:43.780 common/idpf: not in enabled drivers build config 00:02:43.780 common/mvep: not in enabled drivers build config 00:02:43.780 common/octeontx: not in enabled drivers build config 00:02:43.780 bus/auxiliary: not in enabled drivers build config 00:02:43.780 bus/cdx: not in enabled drivers build config 00:02:43.780 bus/dpaa: not in enabled drivers build config 00:02:43.780 bus/fslmc: not in enabled drivers build config 00:02:43.780 bus/ifpga: not in enabled 
drivers build config 00:02:43.780 bus/platform: not in enabled drivers build config 00:02:43.780 bus/vmbus: not in enabled drivers build config 00:02:43.780 common/cnxk: not in enabled drivers build config 00:02:43.780 common/mlx5: not in enabled drivers build config 00:02:43.780 common/nfp: not in enabled drivers build config 00:02:43.780 common/qat: not in enabled drivers build config 00:02:43.780 common/sfc_efx: not in enabled drivers build config 00:02:43.780 mempool/bucket: not in enabled drivers build config 00:02:43.780 mempool/cnxk: not in enabled drivers build config 00:02:43.780 mempool/dpaa: not in enabled drivers build config 00:02:43.780 mempool/dpaa2: not in enabled drivers build config 00:02:43.780 mempool/octeontx: not in enabled drivers build config 00:02:43.780 mempool/stack: not in enabled drivers build config 00:02:43.780 dma/cnxk: not in enabled drivers build config 00:02:43.780 dma/dpaa: not in enabled drivers build config 00:02:43.780 dma/dpaa2: not in enabled drivers build config 00:02:43.780 dma/hisilicon: not in enabled drivers build config 00:02:43.780 dma/idxd: not in enabled drivers build config 00:02:43.780 dma/ioat: not in enabled drivers build config 00:02:43.780 dma/skeleton: not in enabled drivers build config 00:02:43.780 net/af_packet: not in enabled drivers build config 00:02:43.780 net/af_xdp: not in enabled drivers build config 00:02:43.780 net/ark: not in enabled drivers build config 00:02:43.780 net/atlantic: not in enabled drivers build config 00:02:43.780 net/avp: not in enabled drivers build config 00:02:43.780 net/axgbe: not in enabled drivers build config 00:02:43.780 net/bnx2x: not in enabled drivers build config 00:02:43.780 net/bnxt: not in enabled drivers build config 00:02:43.780 net/bonding: not in enabled drivers build config 00:02:43.780 net/cnxk: not in enabled drivers build config 00:02:43.780 net/cpfl: not in enabled drivers build config 00:02:43.780 net/cxgbe: not in enabled drivers build config 00:02:43.780 net/dpaa: not in enabled drivers build config 00:02:43.780 net/dpaa2: not in enabled drivers build config 00:02:43.780 net/e1000: not in enabled drivers build config 00:02:43.780 net/ena: not in enabled drivers build config 00:02:43.780 net/enetc: not in enabled drivers build config 00:02:43.780 net/enetfec: not in enabled drivers build config 00:02:43.780 net/enic: not in enabled drivers build config 00:02:43.780 net/failsafe: not in enabled drivers build config 00:02:43.780 net/fm10k: not in enabled drivers build config 00:02:43.780 net/gve: not in enabled drivers build config 00:02:43.780 net/hinic: not in enabled drivers build config 00:02:43.780 net/hns3: not in enabled drivers build config 00:02:43.780 net/i40e: not in enabled drivers build config 00:02:43.780 net/iavf: not in enabled drivers build config 00:02:43.780 net/ice: not in enabled drivers build config 00:02:43.780 net/idpf: not in enabled drivers build config 00:02:43.780 net/igc: not in enabled drivers build config 00:02:43.780 net/ionic: not in enabled drivers build config 00:02:43.780 net/ipn3ke: not in enabled drivers build config 00:02:43.780 net/ixgbe: not in enabled drivers build config 00:02:43.780 net/mana: not in enabled drivers build config 00:02:43.780 net/memif: not in enabled drivers build config 00:02:43.780 net/mlx4: not in enabled drivers build config 00:02:43.780 net/mlx5: not in enabled drivers build config 00:02:43.780 net/mvneta: not in enabled drivers build config 00:02:43.780 net/mvpp2: not in enabled drivers build config 00:02:43.780 
net/netvsc: not in enabled drivers build config 00:02:43.780 net/nfb: not in enabled drivers build config 00:02:43.780 net/nfp: not in enabled drivers build config 00:02:43.780 net/ngbe: not in enabled drivers build config 00:02:43.780 net/null: not in enabled drivers build config 00:02:43.780 net/octeontx: not in enabled drivers build config 00:02:43.780 net/octeon_ep: not in enabled drivers build config 00:02:43.780 net/pcap: not in enabled drivers build config 00:02:43.780 net/pfe: not in enabled drivers build config 00:02:43.780 net/qede: not in enabled drivers build config 00:02:43.780 net/ring: not in enabled drivers build config 00:02:43.780 net/sfc: not in enabled drivers build config 00:02:43.780 net/softnic: not in enabled drivers build config 00:02:43.780 net/tap: not in enabled drivers build config 00:02:43.780 net/thunderx: not in enabled drivers build config 00:02:43.780 net/txgbe: not in enabled drivers build config 00:02:43.780 net/vdev_netvsc: not in enabled drivers build config 00:02:43.780 net/vhost: not in enabled drivers build config 00:02:43.780 net/virtio: not in enabled drivers build config 00:02:43.780 net/vmxnet3: not in enabled drivers build config 00:02:43.780 raw/*: missing internal dependency, "rawdev" 00:02:43.780 crypto/armv8: not in enabled drivers build config 00:02:43.780 crypto/bcmfs: not in enabled drivers build config 00:02:43.780 crypto/caam_jr: not in enabled drivers build config 00:02:43.780 crypto/ccp: not in enabled drivers build config 00:02:43.780 crypto/cnxk: not in enabled drivers build config 00:02:43.780 crypto/dpaa_sec: not in enabled drivers build config 00:02:43.780 crypto/dpaa2_sec: not in enabled drivers build config 00:02:43.780 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.780 crypto/mlx5: not in enabled drivers build config 00:02:43.780 crypto/mvsam: not in enabled drivers build config 00:02:43.780 crypto/nitrox: not in enabled drivers build config 00:02:43.780 crypto/null: not in enabled drivers build config 00:02:43.780 crypto/octeontx: not in enabled drivers build config 00:02:43.780 crypto/openssl: not in enabled drivers build config 00:02:43.780 crypto/scheduler: not in enabled drivers build config 00:02:43.780 crypto/uadk: not in enabled drivers build config 00:02:43.780 crypto/virtio: not in enabled drivers build config 00:02:43.780 compress/isal: not in enabled drivers build config 00:02:43.780 compress/mlx5: not in enabled drivers build config 00:02:43.780 compress/octeontx: not in enabled drivers build config 00:02:43.780 compress/zlib: not in enabled drivers build config 00:02:43.780 regex/*: missing internal dependency, "regexdev" 00:02:43.780 ml/*: missing internal dependency, "mldev" 00:02:43.780 vdpa/ifc: not in enabled drivers build config 00:02:43.780 vdpa/mlx5: not in enabled drivers build config 00:02:43.780 vdpa/nfp: not in enabled drivers build config 00:02:43.780 vdpa/sfc: not in enabled drivers build config 00:02:43.780 event/*: missing internal dependency, "eventdev" 00:02:43.780 baseband/*: missing internal dependency, "bbdev" 00:02:43.780 gpu/*: missing internal dependency, "gpudev" 00:02:43.780 00:02:43.780 00:02:44.038 Build targets in project: 84 00:02:44.038 00:02:44.038 DPDK 23.11.0 00:02:44.038 00:02:44.038 User defined options 00:02:44.038 buildtype : debug 00:02:44.038 default_library : shared 00:02:44.038 libdir : lib 00:02:44.038 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:44.038 b_sanitize : address 00:02:44.038 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 
-Wno-stringop-overread -Wno-array-bounds 00:02:44.038 c_link_args : 00:02:44.038 cpu_instruction_set: native 00:02:44.038 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:44.038 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:44.038 enable_docs : false 00:02:44.038 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:44.038 enable_kmods : false 00:02:44.038 tests : false 00:02:44.038 00:02:44.038 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.604 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:44.604 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.604 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.604 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.604 [4/264] Linking static target lib/librte_kvargs.a 00:02:44.604 [5/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:44.604 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.604 [7/264] Linking static target lib/librte_log.a 00:02:44.604 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.604 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.604 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.862 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.862 [12/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.862 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:45.121 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:45.121 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.121 [16/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.121 [17/264] Linking static target lib/librte_telemetry.a 00:02:45.121 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.121 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.121 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.121 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.121 [22/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.121 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.379 [24/264] Linking target lib/librte_log.so.24.0 00:02:45.379 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.379 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.379 [27/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:45.379 [28/264] Linking target lib/librte_kvargs.so.24.0 00:02:45.637 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.637 [30/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:45.637 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:45.637 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:45.637 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:45.637 [34/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:45.637 [35/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.637 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:45.637 [37/264] Linking target lib/librte_telemetry.so.24.0 00:02:45.637 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:45.637 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:45.895 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:45.895 [41/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:45.895 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:45.895 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:45.895 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.895 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:45.895 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:45.895 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:46.154 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:46.154 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:46.154 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.154 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.154 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:46.413 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:46.413 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:46.413 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:46.413 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:46.413 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:46.413 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:46.413 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:46.413 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:46.413 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:46.413 [62/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:46.672 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:46.672 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.672 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:46.672 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.930 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.930 [68/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.930 [69/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:46.930 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:46.930 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:46.930 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.930 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:46.930 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.930 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:46.930 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.930 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:47.189 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.189 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.189 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:47.189 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.448 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.448 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:47.448 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.448 [85/264] Linking static target lib/librte_ring.a 00:02:47.448 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:47.448 [87/264] Linking static target lib/librte_eal.a 00:02:47.706 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.706 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:47.706 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:47.706 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:47.706 [92/264] Linking static target lib/librte_mempool.a 00:02:47.706 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.706 [94/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.706 [95/264] Linking static target lib/librte_rcu.a 00:02:47.965 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:47.965 [97/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.965 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:48.223 [99/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:48.223 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:48.223 [101/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.223 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:48.223 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.528 [104/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.528 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.528 [106/264] Linking static target lib/librte_meter.a 00:02:48.528 [107/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:48.528 [108/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.528 [109/264] Linking static target lib/librte_net.a 00:02:48.528 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.528 [111/264] Linking static target lib/librte_mbuf.a 00:02:48.812 [112/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.812 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.812 [114/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.071 [115/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.071 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:49.071 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.071 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.329 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.329 [120/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.329 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.587 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.587 [123/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.587 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.587 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.587 [126/264] Linking static target lib/librte_pci.a 00:02:49.587 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.587 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.587 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.587 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.845 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.845 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:49.845 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:49.845 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.845 [135/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.845 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.845 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.845 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.845 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.845 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.845 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.845 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.103 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.103 [144/264] Linking static target lib/librte_cmdline.a 00:02:50.103 [145/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:50.103 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.361 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.361 [148/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.361 [149/264] Linking static target lib/librte_timer.a 00:02:50.361 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.361 [151/264] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.619 [152/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.619 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.619 [154/264] Linking static target lib/librte_compressdev.a 00:02:50.619 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.619 [156/264] Linking static target lib/librte_ethdev.a 00:02:50.619 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.877 [158/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.877 [159/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.878 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:50.878 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.878 [162/264] Linking static target lib/librte_hash.a 00:02:50.878 [163/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.878 [164/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.878 [165/264] Linking static target lib/librte_dmadev.a 00:02:51.136 [166/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.136 [167/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.136 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.136 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.136 [170/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.136 [171/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.394 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.394 [173/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.394 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.394 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.394 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.394 [177/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.394 [178/264] Linking static target lib/librte_cryptodev.a 00:02:51.652 [179/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.652 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.652 [181/264] Linking static target lib/librte_power.a 00:02:51.652 [182/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.911 [183/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.911 [184/264] Linking static target lib/librte_reorder.a 00:02:51.911 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.911 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.911 [187/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.911 [188/264] Linking static target lib/librte_security.a 00:02:51.911 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:52.169 [190/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.169 [191/264] Generating lib/power.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:52.427 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.427 [193/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.427 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.427 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.685 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.685 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.685 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.685 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.685 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.944 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.944 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.944 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.944 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.944 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.945 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.203 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.203 [208/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.203 [209/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.203 [210/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.203 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.203 [212/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.203 [213/264] Linking static target drivers/librte_bus_pci.a 00:02:53.203 [214/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.203 [215/264] Linking static target drivers/librte_bus_vdev.a 00:02:53.203 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.203 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.203 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.461 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.461 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.461 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.461 [222/264] Linking static target drivers/librte_mempool_ring.a 00:02:53.461 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.029 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.966 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.224 [226/264] Linking target lib/librte_eal.so.24.0 00:02:55.224 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:55.224 [228/264] Linking target lib/librte_meter.so.24.0 00:02:55.225 [229/264] 
Linking target lib/librte_ring.so.24.0 00:02:55.225 [230/264] Linking target lib/librte_timer.so.24.0 00:02:55.225 [231/264] Linking target lib/librte_pci.so.24.0 00:02:55.225 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:55.225 [233/264] Linking target lib/librte_dmadev.so.24.0 00:02:55.225 [234/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:55.225 [235/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:55.225 [236/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:55.483 [237/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:55.483 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:55.483 [239/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:55.483 [240/264] Linking target lib/librte_mempool.so.24.0 00:02:55.483 [241/264] Linking target lib/librte_rcu.so.24.0 00:02:55.483 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:55.483 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:55.483 [244/264] Linking target lib/librte_mbuf.so.24.0 00:02:55.483 [245/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:55.483 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:55.741 [247/264] Linking target lib/librte_net.so.24.0 00:02:55.741 [248/264] Linking target lib/librte_compressdev.so.24.0 00:02:55.741 [249/264] Linking target lib/librte_cryptodev.so.24.0 00:02:55.741 [250/264] Linking target lib/librte_reorder.so.24.0 00:02:55.741 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:55.741 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:55.741 [253/264] Linking target lib/librte_hash.so.24.0 00:02:55.741 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:55.741 [255/264] Linking target lib/librte_security.so.24.0 00:02:55.999 [256/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.999 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:55.999 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:55.999 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:55.999 [260/264] Linking target lib/librte_power.so.24.0 00:02:56.568 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.568 [262/264] Linking static target lib/librte_vhost.a 00:02:57.944 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.944 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:57.944 INFO: autodetecting backend as ninja 00:02:57.944 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:58.879 CC lib/ut/ut.o 00:02:58.879 CC lib/log/log.o 00:02:58.879 CC lib/log/log_flags.o 00:02:58.879 CC lib/log/log_deprecated.o 00:02:58.879 CC lib/ut_mock/mock.o 00:02:58.879 LIB libspdk_ut_mock.a 00:02:58.879 LIB libspdk_ut.a 00:02:58.879 LIB libspdk_log.a 00:02:58.879 SO libspdk_ut_mock.so.5.0 00:02:58.879 SO libspdk_ut.so.1.0 00:02:58.879 SO libspdk_log.so.6.1 00:02:58.879 SYMLINK libspdk_ut.so 00:02:58.879 SYMLINK libspdk_ut_mock.so 00:02:58.879 SYMLINK libspdk_log.so 00:02:58.879 CC 
lib/util/base64.o 00:02:58.879 CC lib/util/bit_array.o 00:02:58.879 CC lib/ioat/ioat.o 00:02:58.879 CC lib/util/cpuset.o 00:02:58.879 CC lib/util/crc16.o 00:02:58.879 CC lib/util/crc32.o 00:02:58.879 CC lib/util/crc32c.o 00:02:58.879 CC lib/dma/dma.o 00:02:58.879 CXX lib/trace_parser/trace.o 00:02:59.136 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.136 CC lib/vfio_user/host/vfio_user.o 00:02:59.136 CC lib/util/crc32_ieee.o 00:02:59.136 CC lib/util/crc64.o 00:02:59.136 CC lib/util/dif.o 00:02:59.136 CC lib/util/fd.o 00:02:59.136 LIB libspdk_dma.a 00:02:59.136 CC lib/util/file.o 00:02:59.136 SO libspdk_dma.so.3.0 00:02:59.136 SYMLINK libspdk_dma.so 00:02:59.136 CC lib/util/hexlify.o 00:02:59.136 CC lib/util/iov.o 00:02:59.136 CC lib/util/math.o 00:02:59.136 LIB libspdk_ioat.a 00:02:59.136 CC lib/util/pipe.o 00:02:59.136 CC lib/util/strerror_tls.o 00:02:59.136 CC lib/util/string.o 00:02:59.136 SO libspdk_ioat.so.6.0 00:02:59.394 LIB libspdk_vfio_user.a 00:02:59.394 SO libspdk_vfio_user.so.4.0 00:02:59.394 SYMLINK libspdk_ioat.so 00:02:59.394 CC lib/util/uuid.o 00:02:59.394 CC lib/util/fd_group.o 00:02:59.394 CC lib/util/xor.o 00:02:59.394 CC lib/util/zipf.o 00:02:59.394 SYMLINK libspdk_vfio_user.so 00:02:59.652 LIB libspdk_util.a 00:02:59.652 SO libspdk_util.so.8.0 00:02:59.910 SYMLINK libspdk_util.so 00:02:59.910 LIB libspdk_trace_parser.a 00:02:59.910 SO libspdk_trace_parser.so.4.0 00:02:59.910 CC lib/json/json_parse.o 00:02:59.910 CC lib/json/json_util.o 00:02:59.910 CC lib/vmd/vmd.o 00:02:59.910 CC lib/vmd/led.o 00:02:59.910 CC lib/env_dpdk/env.o 00:02:59.910 CC lib/idxd/idxd.o 00:02:59.910 CC lib/env_dpdk/memory.o 00:02:59.910 CC lib/conf/conf.o 00:02:59.910 CC lib/rdma/common.o 00:02:59.910 SYMLINK libspdk_trace_parser.so 00:02:59.910 CC lib/env_dpdk/pci.o 00:02:59.910 CC lib/env_dpdk/init.o 00:02:59.910 CC lib/env_dpdk/threads.o 00:03:00.166 LIB libspdk_conf.a 00:03:00.166 CC lib/json/json_write.o 00:03:00.166 SO libspdk_conf.so.5.0 00:03:00.166 CC lib/rdma/rdma_verbs.o 00:03:00.166 SYMLINK libspdk_conf.so 00:03:00.166 CC lib/env_dpdk/pci_ioat.o 00:03:00.166 CC lib/env_dpdk/pci_virtio.o 00:03:00.166 CC lib/env_dpdk/pci_vmd.o 00:03:00.166 CC lib/env_dpdk/pci_idxd.o 00:03:00.166 LIB libspdk_rdma.a 00:03:00.166 SO libspdk_rdma.so.5.0 00:03:00.166 CC lib/env_dpdk/pci_event.o 00:03:00.422 CC lib/idxd/idxd_user.o 00:03:00.422 CC lib/env_dpdk/sigbus_handler.o 00:03:00.422 SYMLINK libspdk_rdma.so 00:03:00.422 CC lib/env_dpdk/pci_dpdk.o 00:03:00.422 LIB libspdk_json.a 00:03:00.422 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.422 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.422 SO libspdk_json.so.5.1 00:03:00.422 CC lib/idxd/idxd_kernel.o 00:03:00.422 SYMLINK libspdk_json.so 00:03:00.422 CC lib/jsonrpc/jsonrpc_server.o 00:03:00.422 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.422 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.422 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.422 LIB libspdk_vmd.a 00:03:00.422 LIB libspdk_idxd.a 00:03:00.680 SO libspdk_vmd.so.5.0 00:03:00.680 SO libspdk_idxd.so.11.0 00:03:00.680 SYMLINK libspdk_vmd.so 00:03:00.680 SYMLINK libspdk_idxd.so 00:03:00.680 LIB libspdk_jsonrpc.a 00:03:00.680 SO libspdk_jsonrpc.so.5.1 00:03:00.937 SYMLINK libspdk_jsonrpc.so 00:03:00.937 CC lib/rpc/rpc.o 00:03:01.195 LIB libspdk_rpc.a 00:03:01.195 SO libspdk_rpc.so.5.0 00:03:01.195 LIB libspdk_env_dpdk.a 00:03:01.195 SYMLINK libspdk_rpc.so 00:03:01.453 SO libspdk_env_dpdk.so.13.0 00:03:01.453 CC lib/notify/notify_rpc.o 00:03:01.453 CC lib/notify/notify.o 00:03:01.453 CC lib/sock/sock_rpc.o 
00:03:01.453 CC lib/sock/sock.o 00:03:01.453 SYMLINK libspdk_env_dpdk.so 00:03:01.453 CC lib/trace/trace.o 00:03:01.453 CC lib/trace/trace_rpc.o 00:03:01.453 CC lib/trace/trace_flags.o 00:03:01.453 LIB libspdk_notify.a 00:03:01.453 SO libspdk_notify.so.5.0 00:03:01.711 SYMLINK libspdk_notify.so 00:03:01.711 LIB libspdk_trace.a 00:03:01.711 SO libspdk_trace.so.9.0 00:03:01.711 SYMLINK libspdk_trace.so 00:03:01.711 LIB libspdk_sock.a 00:03:01.711 SO libspdk_sock.so.8.0 00:03:01.970 SYMLINK libspdk_sock.so 00:03:01.970 CC lib/thread/thread.o 00:03:01.970 CC lib/thread/iobuf.o 00:03:01.970 CC lib/nvme/nvme_ctrlr.o 00:03:01.970 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.970 CC lib/nvme/nvme_fabric.o 00:03:01.970 CC lib/nvme/nvme_qpair.o 00:03:01.970 CC lib/nvme/nvme_ns.o 00:03:01.970 CC lib/nvme/nvme_ns_cmd.o 00:03:01.970 CC lib/nvme/nvme_pcie.o 00:03:01.970 CC lib/nvme/nvme_pcie_common.o 00:03:02.228 CC lib/nvme/nvme.o 00:03:02.487 CC lib/nvme/nvme_quirks.o 00:03:02.745 CC lib/nvme/nvme_transport.o 00:03:02.745 CC lib/nvme/nvme_discovery.o 00:03:02.745 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:02.745 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:02.745 CC lib/nvme/nvme_tcp.o 00:03:02.745 CC lib/nvme/nvme_opal.o 00:03:03.004 CC lib/nvme/nvme_io_msg.o 00:03:03.004 CC lib/nvme/nvme_poll_group.o 00:03:03.262 CC lib/nvme/nvme_zns.o 00:03:03.262 CC lib/nvme/nvme_cuse.o 00:03:03.262 CC lib/nvme/nvme_vfio_user.o 00:03:03.262 CC lib/nvme/nvme_rdma.o 00:03:03.520 LIB libspdk_thread.a 00:03:03.520 SO libspdk_thread.so.9.0 00:03:03.520 SYMLINK libspdk_thread.so 00:03:03.520 CC lib/accel/accel.o 00:03:03.520 CC lib/blob/blobstore.o 00:03:03.520 CC lib/init/json_config.o 00:03:03.778 CC lib/init/subsystem.o 00:03:03.778 CC lib/virtio/virtio.o 00:03:03.778 CC lib/init/subsystem_rpc.o 00:03:03.778 CC lib/init/rpc.o 00:03:03.778 CC lib/virtio/virtio_vhost_user.o 00:03:03.778 CC lib/accel/accel_rpc.o 00:03:03.778 CC lib/accel/accel_sw.o 00:03:03.778 CC lib/virtio/virtio_vfio_user.o 00:03:04.036 LIB libspdk_init.a 00:03:04.037 SO libspdk_init.so.4.0 00:03:04.037 CC lib/virtio/virtio_pci.o 00:03:04.037 CC lib/blob/request.o 00:03:04.037 CC lib/blob/zeroes.o 00:03:04.037 SYMLINK libspdk_init.so 00:03:04.037 CC lib/blob/blob_bs_dev.o 00:03:04.295 CC lib/event/app.o 00:03:04.295 CC lib/event/reactor.o 00:03:04.295 CC lib/event/log_rpc.o 00:03:04.295 LIB libspdk_virtio.a 00:03:04.295 CC lib/event/app_rpc.o 00:03:04.295 CC lib/event/scheduler_static.o 00:03:04.295 SO libspdk_virtio.so.6.0 00:03:04.295 SYMLINK libspdk_virtio.so 00:03:04.554 LIB libspdk_nvme.a 00:03:04.554 SO libspdk_nvme.so.12.0 00:03:04.554 LIB libspdk_event.a 00:03:04.554 LIB libspdk_accel.a 00:03:04.554 SO libspdk_accel.so.14.0 00:03:04.812 SO libspdk_event.so.12.0 00:03:04.812 SYMLINK libspdk_event.so 00:03:04.812 SYMLINK libspdk_accel.so 00:03:04.812 SYMLINK libspdk_nvme.so 00:03:04.812 CC lib/bdev/bdev.o 00:03:04.812 CC lib/bdev/bdev_rpc.o 00:03:04.812 CC lib/bdev/scsi_nvme.o 00:03:04.812 CC lib/bdev/bdev_zone.o 00:03:04.812 CC lib/bdev/part.o 00:03:06.756 LIB libspdk_blob.a 00:03:06.756 SO libspdk_blob.so.10.1 00:03:06.756 SYMLINK libspdk_blob.so 00:03:06.756 CC lib/lvol/lvol.o 00:03:06.756 CC lib/blobfs/blobfs.o 00:03:06.756 CC lib/blobfs/tree.o 00:03:07.690 LIB libspdk_blobfs.a 00:03:07.690 SO libspdk_blobfs.so.9.0 00:03:07.690 LIB libspdk_lvol.a 00:03:07.690 SO libspdk_lvol.so.9.1 00:03:07.690 SYMLINK libspdk_blobfs.so 00:03:07.690 SYMLINK libspdk_lvol.so 00:03:07.690 LIB libspdk_bdev.a 00:03:07.690 SO libspdk_bdev.so.14.0 00:03:07.947 SYMLINK 
libspdk_bdev.so 00:03:07.947 CC lib/ftl/ftl_core.o 00:03:07.947 CC lib/ftl/ftl_init.o 00:03:07.947 CC lib/ftl/ftl_io.o 00:03:07.947 CC lib/ftl/ftl_sb.o 00:03:07.947 CC lib/ftl/ftl_debug.o 00:03:07.947 CC lib/ftl/ftl_layout.o 00:03:07.947 CC lib/scsi/dev.o 00:03:07.947 CC lib/nbd/nbd.o 00:03:07.947 CC lib/ublk/ublk.o 00:03:07.947 CC lib/nvmf/ctrlr.o 00:03:08.205 CC lib/ublk/ublk_rpc.o 00:03:08.205 CC lib/scsi/lun.o 00:03:08.205 CC lib/scsi/port.o 00:03:08.205 CC lib/scsi/scsi.o 00:03:08.205 CC lib/nbd/nbd_rpc.o 00:03:08.205 CC lib/scsi/scsi_bdev.o 00:03:08.205 CC lib/nvmf/ctrlr_discovery.o 00:03:08.205 CC lib/nvmf/ctrlr_bdev.o 00:03:08.205 CC lib/scsi/scsi_pr.o 00:03:08.205 CC lib/nvmf/subsystem.o 00:03:08.461 CC lib/ftl/ftl_l2p.o 00:03:08.461 LIB libspdk_nbd.a 00:03:08.461 SO libspdk_nbd.so.6.0 00:03:08.461 CC lib/nvmf/nvmf.o 00:03:08.461 SYMLINK libspdk_nbd.so 00:03:08.461 CC lib/ftl/ftl_l2p_flat.o 00:03:08.461 LIB libspdk_ublk.a 00:03:08.461 SO libspdk_ublk.so.2.0 00:03:08.461 CC lib/scsi/scsi_rpc.o 00:03:08.461 SYMLINK libspdk_ublk.so 00:03:08.461 CC lib/scsi/task.o 00:03:08.461 CC lib/ftl/ftl_nv_cache.o 00:03:08.719 CC lib/ftl/ftl_band.o 00:03:08.719 CC lib/ftl/ftl_band_ops.o 00:03:08.719 CC lib/nvmf/nvmf_rpc.o 00:03:08.719 LIB libspdk_scsi.a 00:03:08.719 SO libspdk_scsi.so.8.0 00:03:08.719 CC lib/ftl/ftl_writer.o 00:03:08.719 SYMLINK libspdk_scsi.so 00:03:08.719 CC lib/ftl/ftl_rq.o 00:03:08.976 CC lib/ftl/ftl_reloc.o 00:03:08.976 CC lib/ftl/ftl_l2p_cache.o 00:03:08.976 CC lib/nvmf/transport.o 00:03:08.976 CC lib/iscsi/conn.o 00:03:08.976 CC lib/vhost/vhost.o 00:03:09.233 CC lib/vhost/vhost_rpc.o 00:03:09.233 CC lib/vhost/vhost_scsi.o 00:03:09.233 CC lib/nvmf/tcp.o 00:03:09.233 CC lib/vhost/vhost_blk.o 00:03:09.233 CC lib/iscsi/init_grp.o 00:03:09.491 CC lib/ftl/ftl_p2l.o 00:03:09.491 CC lib/iscsi/iscsi.o 00:03:09.491 CC lib/nvmf/rdma.o 00:03:09.491 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.748 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.748 CC lib/vhost/rte_vhost_user.o 00:03:09.748 CC lib/iscsi/md5.o 00:03:09.748 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.748 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.748 CC lib/iscsi/param.o 00:03:09.748 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:10.006 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.006 CC lib/iscsi/portal_grp.o 00:03:10.006 CC lib/iscsi/tgt_node.o 00:03:10.006 CC lib/iscsi/iscsi_subsystem.o 00:03:10.006 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.006 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.263 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.263 CC lib/iscsi/iscsi_rpc.o 00:03:10.263 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.263 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.263 CC lib/iscsi/task.o 00:03:10.263 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.263 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.263 CC lib/ftl/utils/ftl_conf.o 00:03:10.520 CC lib/ftl/utils/ftl_md.o 00:03:10.520 CC lib/ftl/utils/ftl_mempool.o 00:03:10.520 CC lib/ftl/utils/ftl_bitmap.o 00:03:10.520 CC lib/ftl/utils/ftl_property.o 00:03:10.520 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.520 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.520 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.520 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.520 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.520 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.779 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.779 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.779 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:10.779 LIB libspdk_vhost.a 00:03:10.779 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:10.779 CC lib/ftl/base/ftl_base_dev.o 
00:03:10.779 CC lib/ftl/base/ftl_base_bdev.o 00:03:10.779 CC lib/ftl/ftl_trace.o 00:03:10.779 SO libspdk_vhost.so.7.1 00:03:10.779 SYMLINK libspdk_vhost.so 00:03:11.038 LIB libspdk_iscsi.a 00:03:11.038 LIB libspdk_ftl.a 00:03:11.038 SO libspdk_iscsi.so.7.0 00:03:11.038 SO libspdk_ftl.so.8.0 00:03:11.038 SYMLINK libspdk_iscsi.so 00:03:11.296 SYMLINK libspdk_ftl.so 00:03:11.557 LIB libspdk_nvmf.a 00:03:11.557 SO libspdk_nvmf.so.17.0 00:03:11.818 SYMLINK libspdk_nvmf.so 00:03:11.818 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.079 CC module/sock/posix/posix.o 00:03:12.079 CC module/accel/error/accel_error.o 00:03:12.079 CC module/accel/ioat/accel_ioat.o 00:03:12.079 CC module/blob/bdev/blob_bdev.o 00:03:12.079 CC module/accel/iaa/accel_iaa.o 00:03:12.079 CC module/accel/dsa/accel_dsa.o 00:03:12.079 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.079 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.079 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.079 LIB libspdk_env_dpdk_rpc.a 00:03:12.079 SO libspdk_env_dpdk_rpc.so.5.0 00:03:12.079 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.079 LIB libspdk_scheduler_gscheduler.a 00:03:12.079 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.079 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.079 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.079 SO libspdk_scheduler_gscheduler.so.3.0 00:03:12.079 LIB libspdk_scheduler_dynamic.a 00:03:12.079 CC module/accel/error/accel_error_rpc.o 00:03:12.079 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:12.079 SO libspdk_scheduler_dynamic.so.3.0 00:03:12.079 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.079 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.079 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.079 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.079 LIB libspdk_blob_bdev.a 00:03:12.079 SO libspdk_blob_bdev.so.10.1 00:03:12.079 LIB libspdk_accel_ioat.a 00:03:12.079 LIB libspdk_accel_error.a 00:03:12.340 LIB libspdk_accel_dsa.a 00:03:12.340 SO libspdk_accel_ioat.so.5.0 00:03:12.340 SO libspdk_accel_error.so.1.0 00:03:12.340 LIB libspdk_accel_iaa.a 00:03:12.340 SYMLINK libspdk_blob_bdev.so 00:03:12.340 SO libspdk_accel_dsa.so.4.0 00:03:12.340 SO libspdk_accel_iaa.so.2.0 00:03:12.340 SYMLINK libspdk_accel_ioat.so 00:03:12.340 SYMLINK libspdk_accel_error.so 00:03:12.340 SYMLINK libspdk_accel_dsa.so 00:03:12.340 SYMLINK libspdk_accel_iaa.so 00:03:12.340 CC module/bdev/null/bdev_null.o 00:03:12.340 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.340 CC module/bdev/nvme/bdev_nvme.o 00:03:12.340 CC module/bdev/error/vbdev_error.o 00:03:12.340 CC module/bdev/gpt/gpt.o 00:03:12.340 CC module/bdev/delay/vbdev_delay.o 00:03:12.340 CC module/bdev/lvol/vbdev_lvol.o 00:03:12.340 CC module/bdev/malloc/bdev_malloc.o 00:03:12.340 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.602 CC module/bdev/gpt/vbdev_gpt.o 00:03:12.602 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.602 LIB libspdk_sock_posix.a 00:03:12.602 CC module/bdev/null/bdev_null_rpc.o 00:03:12.602 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.602 SO libspdk_sock_posix.so.5.0 00:03:12.602 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.602 LIB libspdk_blobfs_bdev.a 00:03:12.602 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:12.602 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.602 SO libspdk_blobfs_bdev.so.5.0 00:03:12.602 SYMLINK libspdk_sock_posix.so 00:03:12.602 LIB libspdk_bdev_gpt.a 00:03:12.602 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.602 SO libspdk_bdev_gpt.so.5.0 00:03:12.864 SYMLINK libspdk_blobfs_bdev.so 00:03:12.864 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:12.864 LIB libspdk_bdev_error.a 00:03:12.864 CC module/bdev/nvme/nvme_rpc.o 00:03:12.864 SO libspdk_bdev_error.so.5.0 00:03:12.864 SYMLINK libspdk_bdev_gpt.so 00:03:12.864 LIB libspdk_bdev_null.a 00:03:12.864 SO libspdk_bdev_null.so.5.0 00:03:12.864 LIB libspdk_bdev_delay.a 00:03:12.864 SYMLINK libspdk_bdev_error.so 00:03:12.864 LIB libspdk_bdev_malloc.a 00:03:12.864 LIB libspdk_bdev_passthru.a 00:03:12.864 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.864 SO libspdk_bdev_delay.so.5.0 00:03:12.864 SYMLINK libspdk_bdev_null.so 00:03:12.864 SO libspdk_bdev_malloc.so.5.0 00:03:12.864 CC module/bdev/raid/bdev_raid.o 00:03:12.864 SO libspdk_bdev_passthru.so.5.0 00:03:12.864 SYMLINK libspdk_bdev_delay.so 00:03:12.864 CC module/bdev/nvme/vbdev_opal.o 00:03:12.864 SYMLINK libspdk_bdev_passthru.so 00:03:12.864 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.864 SYMLINK libspdk_bdev_malloc.so 00:03:12.864 CC module/bdev/split/vbdev_split.o 00:03:12.864 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.864 LIB libspdk_bdev_lvol.a 00:03:12.864 CC module/bdev/xnvme/bdev_xnvme.o 00:03:12.864 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:13.123 SO libspdk_bdev_lvol.so.5.0 00:03:13.123 SYMLINK libspdk_bdev_lvol.so 00:03:13.123 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:13.123 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:13.123 CC module/bdev/split/vbdev_split_rpc.o 00:03:13.123 CC module/bdev/raid/bdev_raid_rpc.o 00:03:13.123 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.123 LIB libspdk_bdev_split.a 00:03:13.123 CC module/bdev/raid/raid0.o 00:03:13.123 SO libspdk_bdev_split.so.5.0 00:03:13.123 LIB libspdk_bdev_xnvme.a 00:03:13.123 LIB libspdk_bdev_zone_block.a 00:03:13.123 SO libspdk_bdev_xnvme.so.2.0 00:03:13.123 SO libspdk_bdev_zone_block.so.5.0 00:03:13.123 SYMLINK libspdk_bdev_split.so 00:03:13.123 CC module/bdev/aio/bdev_aio.o 00:03:13.382 CC module/bdev/aio/bdev_aio_rpc.o 00:03:13.382 SYMLINK libspdk_bdev_zone_block.so 00:03:13.382 SYMLINK libspdk_bdev_xnvme.so 00:03:13.382 CC module/bdev/raid/raid1.o 00:03:13.382 CC module/bdev/ftl/bdev_ftl.o 00:03:13.382 CC module/bdev/iscsi/bdev_iscsi.o 00:03:13.382 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.382 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:13.382 CC module/bdev/raid/concat.o 00:03:13.382 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:13.382 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:13.382 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.382 LIB libspdk_bdev_aio.a 00:03:13.640 SO libspdk_bdev_aio.so.5.0 00:03:13.640 LIB libspdk_bdev_ftl.a 00:03:13.640 SYMLINK libspdk_bdev_aio.so 00:03:13.640 LIB libspdk_bdev_iscsi.a 00:03:13.640 SO libspdk_bdev_ftl.so.5.0 00:03:13.640 SO libspdk_bdev_iscsi.so.5.0 00:03:13.640 SYMLINK libspdk_bdev_ftl.so 00:03:13.640 SYMLINK libspdk_bdev_iscsi.so 00:03:13.640 LIB libspdk_bdev_raid.a 00:03:13.898 SO libspdk_bdev_raid.so.5.0 00:03:13.898 LIB libspdk_bdev_virtio.a 00:03:13.898 SYMLINK libspdk_bdev_raid.so 00:03:13.898 SO libspdk_bdev_virtio.so.5.0 00:03:13.898 SYMLINK libspdk_bdev_virtio.so 00:03:14.464 LIB libspdk_bdev_nvme.a 00:03:14.722 SO libspdk_bdev_nvme.so.6.0 00:03:14.723 SYMLINK libspdk_bdev_nvme.so 00:03:14.997 CC module/event/subsystems/iobuf/iobuf.o 00:03:14.997 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:14.997 CC module/event/subsystems/vmd/vmd.o 00:03:14.997 CC module/event/subsystems/scheduler/scheduler.o 00:03:14.997 CC module/event/subsystems/sock/sock.o 00:03:14.997 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:14.997 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:03:14.997 LIB libspdk_event_sock.a 00:03:14.997 LIB libspdk_event_scheduler.a 00:03:14.997 LIB libspdk_event_vhost_blk.a 00:03:14.997 SO libspdk_event_sock.so.4.0 00:03:14.997 LIB libspdk_event_iobuf.a 00:03:14.997 LIB libspdk_event_vmd.a 00:03:14.997 SO libspdk_event_scheduler.so.3.0 00:03:14.997 SO libspdk_event_vhost_blk.so.2.0 00:03:14.997 SO libspdk_event_vmd.so.5.0 00:03:14.997 SO libspdk_event_iobuf.so.2.0 00:03:14.997 SYMLINK libspdk_event_sock.so 00:03:14.997 SYMLINK libspdk_event_scheduler.so 00:03:14.997 SYMLINK libspdk_event_vhost_blk.so 00:03:14.997 SYMLINK libspdk_event_vmd.so 00:03:14.997 SYMLINK libspdk_event_iobuf.so 00:03:15.254 CC module/event/subsystems/accel/accel.o 00:03:15.254 LIB libspdk_event_accel.a 00:03:15.512 SO libspdk_event_accel.so.5.0 00:03:15.512 SYMLINK libspdk_event_accel.so 00:03:15.512 CC module/event/subsystems/bdev/bdev.o 00:03:15.770 LIB libspdk_event_bdev.a 00:03:15.770 SO libspdk_event_bdev.so.5.0 00:03:15.770 SYMLINK libspdk_event_bdev.so 00:03:16.028 CC module/event/subsystems/ublk/ublk.o 00:03:16.028 CC module/event/subsystems/scsi/scsi.o 00:03:16.028 CC module/event/subsystems/nbd/nbd.o 00:03:16.028 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.028 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.028 LIB libspdk_event_ublk.a 00:03:16.028 LIB libspdk_event_nbd.a 00:03:16.028 LIB libspdk_event_scsi.a 00:03:16.028 SO libspdk_event_ublk.so.2.0 00:03:16.028 SO libspdk_event_nbd.so.5.0 00:03:16.028 SO libspdk_event_scsi.so.5.0 00:03:16.028 LIB libspdk_event_nvmf.a 00:03:16.028 SYMLINK libspdk_event_ublk.so 00:03:16.028 SYMLINK libspdk_event_nbd.so 00:03:16.028 SYMLINK libspdk_event_scsi.so 00:03:16.028 SO libspdk_event_nvmf.so.5.0 00:03:16.028 SYMLINK libspdk_event_nvmf.so 00:03:16.326 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.326 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.326 LIB libspdk_event_vhost_scsi.a 00:03:16.326 SO libspdk_event_vhost_scsi.so.2.0 00:03:16.326 LIB libspdk_event_iscsi.a 00:03:16.326 SO libspdk_event_iscsi.so.5.0 00:03:16.326 SYMLINK libspdk_event_vhost_scsi.so 00:03:16.326 SYMLINK libspdk_event_iscsi.so 00:03:16.584 SO libspdk.so.5.0 00:03:16.584 SYMLINK libspdk.so 00:03:16.584 CXX app/trace/trace.o 00:03:16.584 CC app/trace_record/trace_record.o 00:03:16.584 CC app/spdk_lspci/spdk_lspci.o 00:03:16.584 CC app/iscsi_tgt/iscsi_tgt.o 00:03:16.584 CC app/nvmf_tgt/nvmf_main.o 00:03:16.584 CC app/spdk_tgt/spdk_tgt.o 00:03:16.584 CC examples/accel/perf/accel_perf.o 00:03:16.584 CC test/app/bdev_svc/bdev_svc.o 00:03:16.584 CC test/bdev/bdevio/bdevio.o 00:03:16.843 CC test/accel/dif/dif.o 00:03:16.843 LINK spdk_lspci 00:03:16.843 LINK nvmf_tgt 00:03:16.843 LINK spdk_tgt 00:03:16.843 LINK spdk_trace_record 00:03:16.843 LINK bdev_svc 00:03:16.843 LINK iscsi_tgt 00:03:16.843 LINK spdk_trace 00:03:17.102 TEST_HEADER include/spdk/accel.h 00:03:17.102 TEST_HEADER include/spdk/accel_module.h 00:03:17.102 TEST_HEADER include/spdk/assert.h 00:03:17.102 TEST_HEADER include/spdk/barrier.h 00:03:17.102 TEST_HEADER include/spdk/base64.h 00:03:17.102 TEST_HEADER include/spdk/bdev.h 00:03:17.102 TEST_HEADER include/spdk/bdev_module.h 00:03:17.102 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.102 TEST_HEADER include/spdk/bit_array.h 00:03:17.102 LINK dif 00:03:17.102 TEST_HEADER include/spdk/bit_pool.h 00:03:17.102 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.102 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.102 TEST_HEADER include/spdk/blobfs.h 00:03:17.102 
TEST_HEADER include/spdk/blob.h 00:03:17.102 TEST_HEADER include/spdk/conf.h 00:03:17.102 TEST_HEADER include/spdk/config.h 00:03:17.102 TEST_HEADER include/spdk/cpuset.h 00:03:17.102 TEST_HEADER include/spdk/crc16.h 00:03:17.102 TEST_HEADER include/spdk/crc32.h 00:03:17.102 CC test/blobfs/mkfs/mkfs.o 00:03:17.102 TEST_HEADER include/spdk/crc64.h 00:03:17.102 TEST_HEADER include/spdk/dif.h 00:03:17.102 LINK bdevio 00:03:17.102 TEST_HEADER include/spdk/dma.h 00:03:17.102 TEST_HEADER include/spdk/endian.h 00:03:17.102 LINK accel_perf 00:03:17.102 TEST_HEADER include/spdk/env_dpdk.h 00:03:17.102 TEST_HEADER include/spdk/env.h 00:03:17.102 TEST_HEADER include/spdk/event.h 00:03:17.102 TEST_HEADER include/spdk/fd_group.h 00:03:17.102 TEST_HEADER include/spdk/fd.h 00:03:17.102 TEST_HEADER include/spdk/file.h 00:03:17.102 TEST_HEADER include/spdk/ftl.h 00:03:17.102 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.102 CC test/app/histogram_perf/histogram_perf.o 00:03:17.102 CC test/dma/test_dma/test_dma.o 00:03:17.102 TEST_HEADER include/spdk/hexlify.h 00:03:17.103 CC app/spdk_nvme_perf/perf.o 00:03:17.103 TEST_HEADER include/spdk/histogram_data.h 00:03:17.103 TEST_HEADER include/spdk/idxd.h 00:03:17.103 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.103 TEST_HEADER include/spdk/init.h 00:03:17.103 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:17.103 TEST_HEADER include/spdk/ioat.h 00:03:17.103 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.103 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.103 TEST_HEADER include/spdk/json.h 00:03:17.103 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.103 TEST_HEADER include/spdk/likely.h 00:03:17.103 TEST_HEADER include/spdk/log.h 00:03:17.103 TEST_HEADER include/spdk/lvol.h 00:03:17.103 TEST_HEADER include/spdk/memory.h 00:03:17.103 TEST_HEADER include/spdk/mmio.h 00:03:17.103 TEST_HEADER include/spdk/nbd.h 00:03:17.103 TEST_HEADER include/spdk/notify.h 00:03:17.103 TEST_HEADER include/spdk/nvme.h 00:03:17.103 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.103 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.103 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.103 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.103 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.103 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.103 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.103 TEST_HEADER include/spdk/nvmf.h 00:03:17.103 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.103 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.103 TEST_HEADER include/spdk/opal.h 00:03:17.103 TEST_HEADER include/spdk/opal_spec.h 00:03:17.103 TEST_HEADER include/spdk/pci_ids.h 00:03:17.103 TEST_HEADER include/spdk/pipe.h 00:03:17.103 TEST_HEADER include/spdk/queue.h 00:03:17.103 TEST_HEADER include/spdk/reduce.h 00:03:17.103 TEST_HEADER include/spdk/rpc.h 00:03:17.103 TEST_HEADER include/spdk/scheduler.h 00:03:17.103 TEST_HEADER include/spdk/scsi.h 00:03:17.103 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.103 TEST_HEADER include/spdk/sock.h 00:03:17.103 TEST_HEADER include/spdk/stdinc.h 00:03:17.103 TEST_HEADER include/spdk/string.h 00:03:17.103 TEST_HEADER include/spdk/thread.h 00:03:17.103 TEST_HEADER include/spdk/trace.h 00:03:17.103 TEST_HEADER include/spdk/trace_parser.h 00:03:17.103 TEST_HEADER include/spdk/tree.h 00:03:17.103 TEST_HEADER include/spdk/ublk.h 00:03:17.103 TEST_HEADER include/spdk/util.h 00:03:17.103 TEST_HEADER include/spdk/uuid.h 00:03:17.103 TEST_HEADER include/spdk/version.h 00:03:17.103 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.103 TEST_HEADER include/spdk/vfio_user_spec.h 
00:03:17.103 TEST_HEADER include/spdk/vhost.h 00:03:17.103 TEST_HEADER include/spdk/vmd.h 00:03:17.103 TEST_HEADER include/spdk/xor.h 00:03:17.103 TEST_HEADER include/spdk/zipf.h 00:03:17.103 CXX test/cpp_headers/accel.o 00:03:17.103 LINK histogram_perf 00:03:17.103 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.103 CXX test/cpp_headers/accel_module.o 00:03:17.103 LINK mkfs 00:03:17.103 CXX test/cpp_headers/assert.o 00:03:17.361 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.361 CXX test/cpp_headers/barrier.o 00:03:17.361 CXX test/cpp_headers/base64.o 00:03:17.361 CC test/event/event_perf/event_perf.o 00:03:17.361 CC test/app/jsoncat/jsoncat.o 00:03:17.361 LINK nvme_fuzz 00:03:17.361 LINK test_dma 00:03:17.361 LINK hello_bdev 00:03:17.361 CC test/lvol/esnap/esnap.o 00:03:17.361 CC test/app/stub/stub.o 00:03:17.361 CXX test/cpp_headers/bdev.o 00:03:17.620 LINK event_perf 00:03:17.620 LINK jsoncat 00:03:17.620 LINK mem_callbacks 00:03:17.620 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.620 LINK stub 00:03:17.620 CC test/nvme/aer/aer.o 00:03:17.620 CXX test/cpp_headers/bdev_module.o 00:03:17.620 CC test/rpc_client/rpc_client_test.o 00:03:17.620 CC test/event/reactor/reactor.o 00:03:17.620 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.620 CC test/env/vtophys/vtophys.o 00:03:17.620 LINK spdk_nvme_perf 00:03:17.878 CXX test/cpp_headers/bdev_zone.o 00:03:17.878 LINK reactor 00:03:17.878 LINK rpc_client_test 00:03:17.878 CC test/thread/poller_perf/poller_perf.o 00:03:17.878 LINK vtophys 00:03:17.878 CC app/spdk_nvme_identify/identify.o 00:03:17.878 LINK aer 00:03:17.878 CXX test/cpp_headers/bit_array.o 00:03:17.878 LINK poller_perf 00:03:17.878 CC test/event/reactor_perf/reactor_perf.o 00:03:17.878 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.136 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.137 CC test/nvme/reset/reset.o 00:03:18.137 LINK reactor_perf 00:03:18.137 CXX test/cpp_headers/bit_pool.o 00:03:18.137 CC app/spdk_top/spdk_top.o 00:03:18.137 LINK env_dpdk_post_init 00:03:18.137 LINK spdk_nvme_discover 00:03:18.137 CXX test/cpp_headers/blob_bdev.o 00:03:18.137 CC test/event/app_repeat/app_repeat.o 00:03:18.395 LINK reset 00:03:18.395 CC test/env/memory/memory_ut.o 00:03:18.395 CC app/vhost/vhost.o 00:03:18.395 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.395 LINK app_repeat 00:03:18.395 LINK bdevperf 00:03:18.395 CC test/nvme/sgl/sgl.o 00:03:18.395 LINK vhost 00:03:18.654 CXX test/cpp_headers/blobfs.o 00:03:18.654 CC test/event/scheduler/scheduler.o 00:03:18.654 CXX test/cpp_headers/blob.o 00:03:18.654 LINK spdk_nvme_identify 00:03:18.654 CC examples/blob/hello_world/hello_blob.o 00:03:18.654 LINK sgl 00:03:18.915 CC app/spdk_dd/spdk_dd.o 00:03:18.915 CXX test/cpp_headers/conf.o 00:03:18.915 LINK scheduler 00:03:18.915 CC app/fio/nvme/fio_plugin.o 00:03:18.915 CC test/nvme/e2edp/nvme_dp.o 00:03:18.915 LINK iscsi_fuzz 00:03:18.915 LINK hello_blob 00:03:18.915 CXX test/cpp_headers/config.o 00:03:18.915 CXX test/cpp_headers/cpuset.o 00:03:18.915 LINK spdk_top 00:03:19.180 CC test/nvme/overhead/overhead.o 00:03:19.180 CXX test/cpp_headers/crc16.o 00:03:19.180 LINK nvme_dp 00:03:19.180 LINK memory_ut 00:03:19.180 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.180 LINK spdk_dd 00:03:19.180 CXX test/cpp_headers/crc32.o 00:03:19.180 CC examples/blob/cli/blobcli.o 00:03:19.180 CXX test/cpp_headers/crc64.o 00:03:19.180 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.180 CXX test/cpp_headers/dif.o 00:03:19.180 CC app/fio/bdev/fio_plugin.o 00:03:19.180 CC 
test/nvme/err_injection/err_injection.o 00:03:19.180 CC test/env/pci/pci_ut.o 00:03:19.438 LINK overhead 00:03:19.438 CC test/nvme/startup/startup.o 00:03:19.438 CXX test/cpp_headers/dma.o 00:03:19.438 LINK spdk_nvme 00:03:19.438 LINK err_injection 00:03:19.438 LINK startup 00:03:19.438 CXX test/cpp_headers/endian.o 00:03:19.438 LINK vhost_fuzz 00:03:19.438 CC test/nvme/reserve/reserve.o 00:03:19.438 LINK blobcli 00:03:19.438 CXX test/cpp_headers/env_dpdk.o 00:03:19.438 CXX test/cpp_headers/env.o 00:03:19.438 CC test/nvme/simple_copy/simple_copy.o 00:03:19.697 LINK pci_ut 00:03:19.697 CC test/nvme/connect_stress/connect_stress.o 00:03:19.697 CC test/nvme/boot_partition/boot_partition.o 00:03:19.697 CC test/nvme/compliance/nvme_compliance.o 00:03:19.697 LINK reserve 00:03:19.697 CXX test/cpp_headers/event.o 00:03:19.697 CC examples/ioat/perf/perf.o 00:03:19.697 LINK spdk_bdev 00:03:19.697 LINK simple_copy 00:03:19.697 LINK boot_partition 00:03:19.697 LINK connect_stress 00:03:19.697 CC examples/ioat/verify/verify.o 00:03:19.697 CXX test/cpp_headers/fd_group.o 00:03:19.697 CXX test/cpp_headers/fd.o 00:03:19.955 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.955 LINK ioat_perf 00:03:19.955 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.955 CXX test/cpp_headers/file.o 00:03:19.955 CC test/nvme/fdp/fdp.o 00:03:19.955 LINK nvme_compliance 00:03:19.955 CC test/nvme/cuse/cuse.o 00:03:19.955 LINK verify 00:03:19.955 LINK fused_ordering 00:03:19.955 CC examples/nvme/hello_world/hello_world.o 00:03:19.955 CXX test/cpp_headers/ftl.o 00:03:19.955 LINK doorbell_aers 00:03:19.955 CC examples/nvme/reconnect/reconnect.o 00:03:20.214 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.214 CC examples/sock/hello_world/hello_sock.o 00:03:20.214 LINK hello_world 00:03:20.214 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.214 LINK fdp 00:03:20.214 CXX test/cpp_headers/gpt_spec.o 00:03:20.214 CXX test/cpp_headers/hexlify.o 00:03:20.214 LINK lsvmd 00:03:20.214 CXX test/cpp_headers/histogram_data.o 00:03:20.214 LINK reconnect 00:03:20.214 CC examples/vmd/led/led.o 00:03:20.473 CXX test/cpp_headers/idxd.o 00:03:20.473 LINK hello_sock 00:03:20.473 LINK led 00:03:20.473 CC examples/util/zipf/zipf.o 00:03:20.473 CC examples/thread/thread/thread_ex.o 00:03:20.473 CC examples/nvmf/nvmf/nvmf.o 00:03:20.473 CC examples/nvme/arbitration/arbitration.o 00:03:20.473 CXX test/cpp_headers/idxd_spec.o 00:03:20.473 LINK zipf 00:03:20.473 LINK nvme_manage 00:03:20.473 CXX test/cpp_headers/init.o 00:03:20.473 CC examples/idxd/perf/perf.o 00:03:20.731 LINK thread 00:03:20.731 LINK nvmf 00:03:20.731 CXX test/cpp_headers/ioat.o 00:03:20.731 LINK cuse 00:03:20.731 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:20.731 CC examples/nvme/hotplug/hotplug.o 00:03:20.731 LINK arbitration 00:03:20.731 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.731 CXX test/cpp_headers/ioat_spec.o 00:03:20.731 CXX test/cpp_headers/iscsi_spec.o 00:03:20.731 CXX test/cpp_headers/json.o 00:03:20.989 LINK interrupt_tgt 00:03:20.989 CXX test/cpp_headers/jsonrpc.o 00:03:20.989 LINK cmb_copy 00:03:20.989 LINK idxd_perf 00:03:20.989 CXX test/cpp_headers/likely.o 00:03:20.989 CC examples/nvme/abort/abort.o 00:03:20.989 LINK hotplug 00:03:20.989 CXX test/cpp_headers/log.o 00:03:20.989 CXX test/cpp_headers/lvol.o 00:03:20.989 CXX test/cpp_headers/memory.o 00:03:20.989 CXX test/cpp_headers/mmio.o 00:03:20.989 CXX test/cpp_headers/nbd.o 00:03:20.989 CXX test/cpp_headers/notify.o 00:03:20.989 CXX test/cpp_headers/nvme.o 00:03:20.989 CXX test/cpp_headers/nvme_intel.o 
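[editor's note] The TEST_HEADER include and CXX test/cpp_headers/*.o entries interleaved through this stretch appear to come from a header-hygiene pass: each public header is compiled on its own, so a header that forgets one of its transitive includes fails the build. A minimal sketch of that idea, assuming a compiler on PATH and an include/spdk layout; the loop, paths, and messages are illustrative, not the project's actual makefile rule:

#!/usr/bin/env bash
# Sketch: compile each public header standalone so that a header which
# omits one of its own includes fails loudly. Paths are illustrative.
set -euo pipefail
shopt -s nullglob

include_root=${1:-include}            # assumed layout: include/spdk/*.h
cc=${CC:-cc}
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

for hdr in "$include_root"/spdk/*.h; do
    # One-line translation unit that pulls in only this header.
    printf '#include <spdk/%s>\n' "${hdr##*/}" > "$tmpdir/check.c"
    # -fsyntax-only: parse and type-check without emitting an object.
    "$cc" -I"$include_root" -fsyntax-only "$tmpdir/check.c" ||
        { echo "header is not self-contained: $hdr" >&2; exit 1; }
done
echo "all public headers compiled standalone"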
00:03:20.989 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.989 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.989 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.989 CXX test/cpp_headers/nvme_spec.o 00:03:21.248 CXX test/cpp_headers/nvme_zns.o 00:03:21.248 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.248 LINK pmr_persistence 00:03:21.248 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.248 CXX test/cpp_headers/nvmf.o 00:03:21.248 CXX test/cpp_headers/nvmf_spec.o 00:03:21.248 CXX test/cpp_headers/nvmf_transport.o 00:03:21.248 CXX test/cpp_headers/opal.o 00:03:21.248 CXX test/cpp_headers/opal_spec.o 00:03:21.248 CXX test/cpp_headers/pci_ids.o 00:03:21.248 LINK abort 00:03:21.248 CXX test/cpp_headers/pipe.o 00:03:21.248 CXX test/cpp_headers/queue.o 00:03:21.248 LINK esnap 00:03:21.248 CXX test/cpp_headers/reduce.o 00:03:21.248 CXX test/cpp_headers/rpc.o 00:03:21.248 CXX test/cpp_headers/scheduler.o 00:03:21.248 CXX test/cpp_headers/scsi.o 00:03:21.248 CXX test/cpp_headers/scsi_spec.o 00:03:21.506 CXX test/cpp_headers/sock.o 00:03:21.506 CXX test/cpp_headers/stdinc.o 00:03:21.506 CXX test/cpp_headers/string.o 00:03:21.506 CXX test/cpp_headers/thread.o 00:03:21.506 CXX test/cpp_headers/trace.o 00:03:21.506 CXX test/cpp_headers/trace_parser.o 00:03:21.506 CXX test/cpp_headers/tree.o 00:03:21.506 CXX test/cpp_headers/ublk.o 00:03:21.506 CXX test/cpp_headers/util.o 00:03:21.506 CXX test/cpp_headers/uuid.o 00:03:21.506 CXX test/cpp_headers/version.o 00:03:21.506 CXX test/cpp_headers/vfio_user_pci.o 00:03:21.506 CXX test/cpp_headers/vfio_user_spec.o 00:03:21.506 CXX test/cpp_headers/vhost.o 00:03:21.506 CXX test/cpp_headers/vmd.o 00:03:21.506 CXX test/cpp_headers/xor.o 00:03:21.506 CXX test/cpp_headers/zipf.o 00:03:21.765 00:03:21.765 real 0m46.615s 00:03:21.765 user 4m44.982s 00:03:21.765 sys 0m56.762s 00:03:21.765 07:18:30 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:21.765 ************************************ 00:03:21.765 END TEST make 00:03:21.765 ************************************ 00:03:21.765 07:18:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.766 07:18:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:21.766 07:18:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:21.766 07:18:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:21.766 07:18:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:21.766 07:18:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:21.766 07:18:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:21.766 07:18:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:21.766 07:18:30 -- scripts/common.sh@335 -- # IFS=.-: 00:03:21.766 07:18:30 -- scripts/common.sh@335 -- # read -ra ver1 00:03:21.766 07:18:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.766 07:18:30 -- scripts/common.sh@336 -- # read -ra ver2 00:03:21.766 07:18:30 -- scripts/common.sh@337 -- # local 'op=<' 00:03:21.766 07:18:30 -- scripts/common.sh@339 -- # ver1_l=2 00:03:21.766 07:18:30 -- scripts/common.sh@340 -- # ver2_l=1 00:03:21.766 07:18:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:21.766 07:18:30 -- scripts/common.sh@343 -- # case "$op" in 00:03:21.766 07:18:30 -- scripts/common.sh@344 -- # : 1 00:03:21.766 07:18:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:21.766 07:18:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.766 07:18:30 -- scripts/common.sh@364 -- # decimal 1 00:03:21.766 07:18:30 -- scripts/common.sh@352 -- # local d=1 00:03:21.766 07:18:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.766 07:18:30 -- scripts/common.sh@354 -- # echo 1 00:03:21.766 07:18:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:21.766 07:18:30 -- scripts/common.sh@365 -- # decimal 2 00:03:21.766 07:18:30 -- scripts/common.sh@352 -- # local d=2 00:03:21.766 07:18:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.766 07:18:30 -- scripts/common.sh@354 -- # echo 2 00:03:21.766 07:18:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:21.766 07:18:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:21.766 07:18:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:21.766 07:18:30 -- scripts/common.sh@367 -- # return 0 00:03:21.766 07:18:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.766 07:18:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:21.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.766 --rc genhtml_branch_coverage=1 00:03:21.766 --rc genhtml_function_coverage=1 00:03:21.766 --rc genhtml_legend=1 00:03:21.766 --rc geninfo_all_blocks=1 00:03:21.766 --rc geninfo_unexecuted_blocks=1 00:03:21.766 00:03:21.766 ' 00:03:21.766 07:18:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:21.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.766 --rc genhtml_branch_coverage=1 00:03:21.766 --rc genhtml_function_coverage=1 00:03:21.766 --rc genhtml_legend=1 00:03:21.766 --rc geninfo_all_blocks=1 00:03:21.766 --rc geninfo_unexecuted_blocks=1 00:03:21.766 00:03:21.766 ' 00:03:21.766 07:18:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:21.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.766 --rc genhtml_branch_coverage=1 00:03:21.766 --rc genhtml_function_coverage=1 00:03:21.766 --rc genhtml_legend=1 00:03:21.766 --rc geninfo_all_blocks=1 00:03:21.766 --rc geninfo_unexecuted_blocks=1 00:03:21.766 00:03:21.766 ' 00:03:21.766 07:18:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:21.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.766 --rc genhtml_branch_coverage=1 00:03:21.766 --rc genhtml_function_coverage=1 00:03:21.766 --rc genhtml_legend=1 00:03:21.766 --rc geninfo_all_blocks=1 00:03:21.766 --rc geninfo_unexecuted_blocks=1 00:03:21.766 00:03:21.766 ' 00:03:21.766 07:18:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:21.766 07:18:30 -- nvmf/common.sh@7 -- # uname -s 00:03:21.766 07:18:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.766 07:18:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.766 07:18:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.766 07:18:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.766 07:18:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.766 07:18:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.766 07:18:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.766 07:18:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.766 07:18:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.766 07:18:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.766 07:18:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:323a6621-08ea-4853-8a15-1f16326b6ad3 00:03:21.766 
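[editor's note] The scripts/common.sh trace above (it reappears verbatim each time a sub-test re-sources the file) is a field-wise version comparison: "lt 1.15 2" splits both version strings on ".-:" and compares components numerically to decide which lcov flags apply. A compact re-implementation of the same idea, written fresh for illustration rather than copied from the script:

#!/usr/bin/env bash
# Sketch: compare two dotted version strings field by field, as the
# cmp_versions trace above does for "lt 1.15 2". Illustrative only.
version_lt() {
    local IFS=.-:                       # split on the same separators
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0} # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                            # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 < 2: use legacy --rc options"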
07:18:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=323a6621-08ea-4853-8a15-1f16326b6ad3 00:03:21.766 07:18:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.766 07:18:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.766 07:18:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:21.766 07:18:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:21.766 07:18:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.766 07:18:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.766 07:18:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.766 07:18:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.766 07:18:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.766 07:18:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.766 07:18:30 -- paths/export.sh@5 -- # export PATH 00:03:21.766 07:18:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.766 07:18:30 -- nvmf/common.sh@46 -- # : 0 00:03:21.766 07:18:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:21.766 07:18:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:21.766 07:18:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:21.766 07:18:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.766 07:18:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.766 07:18:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:21.766 07:18:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:21.766 07:18:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:21.766 07:18:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.766 07:18:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.766 07:18:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.766 07:18:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.766 07:18:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.766 07:18:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.766 07:18:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.766 07:18:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.766 07:18:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.766 07:18:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.766 07:18:31 -- spdk/autotest.sh@48 
-- # udevadm_pid=48151 00:03:21.766 07:18:31 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:22.025 07:18:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.025 07:18:31 -- spdk/autotest.sh@54 -- # echo 48174 00:03:22.025 07:18:31 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:22.025 07:18:31 -- spdk/autotest.sh@56 -- # echo 48181 00:03:22.025 07:18:31 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:22.025 07:18:31 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:22.025 07:18:31 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.025 07:18:31 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:22.025 07:18:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.025 07:18:31 -- common/autotest_common.sh@10 -- # set +x 00:03:22.025 07:18:31 -- spdk/autotest.sh@70 -- # create_test_list 00:03:22.025 07:18:31 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:22.025 07:18:31 -- common/autotest_common.sh@10 -- # set +x 00:03:22.025 07:18:31 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.025 07:18:31 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.025 07:18:31 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.025 07:18:31 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.025 07:18:31 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.025 07:18:31 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:22.025 07:18:31 -- common/autotest_common.sh@1450 -- # uname 00:03:22.025 07:18:31 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:22.025 07:18:31 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:22.025 07:18:31 -- common/autotest_common.sh@1470 -- # uname 00:03:22.025 07:18:31 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:22.025 07:18:31 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:22.025 07:18:31 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.025 lcov: LCOV version 1.15 00:03:22.025 07:18:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:30.153 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:30.153 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:30.153 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:30.153 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:30.153 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:30.153 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:52.127 07:18:58 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:52.127 07:18:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.127 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:03:52.127 07:18:58 -- spdk/autotest.sh@89 -- # rm -f 00:03:52.127 07:18:58 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.127 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:52.127 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:52.127 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:52.127 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:52.127 07:18:59 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:52.127 07:18:59 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:52.127 07:18:59 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:52.127 07:18:59 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:52.127 07:18:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.127 07:18:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.127 07:18:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.127 07:18:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.127 07:18:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:52.127 07:18:59 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:52.127 07:18:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.127 07:18:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:52.127 07:18:59 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:52.127 07:18:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.127 07:18:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:52.127 07:18:59 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.127 07:18:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:52.127 07:18:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:52.127 07:18:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.127 07:18:59 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:52.127 07:18:59 -- spdk/autotest.sh@108 -- # grep -v p 00:03:52.127 07:18:59 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:52.127 07:18:59 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:52.127 07:18:59 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:52.127 07:18:59 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:52.127 07:18:59 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:52.127 07:18:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.127 No valid GPT data, bailing 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # pt= 00:03:52.127 07:18:59 -- scripts/common.sh@394 -- # return 1 00:03:52.127 07:18:59 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.127 1+0 records in 00:03:52.127 1+0 records out 00:03:52.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288879 s, 36.3 MB/s 00:03:52.127 07:18:59 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:52.127 07:18:59 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:52.127 07:18:59 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:52.127 07:18:59 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:52.127 07:18:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:52.127 No valid GPT data, bailing 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # pt= 00:03:52.127 07:18:59 -- scripts/common.sh@394 -- # return 1 00:03:52.127 07:18:59 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:52.127 1+0 records in 00:03:52.127 1+0 records out 00:03:52.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00554241 s, 189 MB/s 00:03:52.127 07:18:59 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:52.127 07:18:59 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:52.127 07:18:59 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:52.127 07:18:59 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:52.127 07:18:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:52.127 No valid GPT data, bailing 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # pt= 00:03:52.127 07:18:59 -- scripts/common.sh@394 -- # return 1 00:03:52.127 07:18:59 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:52.127 1+0 
records in 00:03:52.127 1+0 records out 00:03:52.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605716 s, 173 MB/s 00:03:52.127 07:18:59 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:52.127 07:18:59 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:52.127 07:18:59 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:03:52.127 07:18:59 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:52.127 07:18:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:52.127 No valid GPT data, bailing 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:52.127 07:18:59 -- scripts/common.sh@393 -- # pt= 00:03:52.127 07:18:59 -- scripts/common.sh@394 -- # return 1 00:03:52.127 07:18:59 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:52.127 1+0 records in 00:03:52.127 1+0 records out 00:03:52.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612239 s, 171 MB/s 00:03:52.127 07:18:59 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:52.127 07:18:59 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:52.127 07:18:59 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:03:52.127 07:18:59 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:52.127 07:18:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:52.127 No valid GPT data, bailing 00:03:52.128 07:18:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:52.128 07:18:59 -- scripts/common.sh@393 -- # pt= 00:03:52.128 07:18:59 -- scripts/common.sh@394 -- # return 1 00:03:52.128 07:18:59 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:52.128 1+0 records in 00:03:52.128 1+0 records out 00:03:52.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633072 s, 166 MB/s 00:03:52.128 07:18:59 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:52.128 07:18:59 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:52.128 07:18:59 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:52.128 07:18:59 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:52.128 07:18:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:52.128 No valid GPT data, bailing 00:03:52.128 07:19:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:52.128 07:19:00 -- scripts/common.sh@393 -- # pt= 00:03:52.128 07:19:00 -- scripts/common.sh@394 -- # return 1 00:03:52.128 07:19:00 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:52.128 1+0 records in 00:03:52.128 1+0 records out 00:03:52.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600177 s, 175 MB/s 00:03:52.128 07:19:00 -- spdk/autotest.sh@116 -- # sync 00:03:52.128 07:19:00 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.128 07:19:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.128 07:19:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:53.072 07:19:02 -- spdk/autotest.sh@122 -- # uname -s 00:03:53.072 07:19:02 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:53.072 07:19:02 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:53.072 07:19:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.072 07:19:02 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.072 07:19:02 -- common/autotest_common.sh@10 -- # set +x 00:03:53.072 ************************************ 00:03:53.072 START TEST setup.sh 00:03:53.072 ************************************ 00:03:53.072 07:19:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:53.072 * Looking for test storage... 00:03:53.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.072 07:19:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:53.072 07:19:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:53.072 07:19:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:53.072 07:19:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:53.072 07:19:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:53.072 07:19:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:53.072 07:19:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:53.072 07:19:02 -- scripts/common.sh@335 -- # IFS=.-: 00:03:53.072 07:19:02 -- scripts/common.sh@335 -- # read -ra ver1 00:03:53.072 07:19:02 -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.072 07:19:02 -- scripts/common.sh@336 -- # read -ra ver2 00:03:53.072 07:19:02 -- scripts/common.sh@337 -- # local 'op=<' 00:03:53.072 07:19:02 -- scripts/common.sh@339 -- # ver1_l=2 00:03:53.072 07:19:02 -- scripts/common.sh@340 -- # ver2_l=1 00:03:53.072 07:19:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:53.072 07:19:02 -- scripts/common.sh@343 -- # case "$op" in 00:03:53.072 07:19:02 -- scripts/common.sh@344 -- # : 1 00:03:53.072 07:19:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:53.072 07:19:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.072 07:19:02 -- scripts/common.sh@364 -- # decimal 1 00:03:53.072 07:19:02 -- scripts/common.sh@352 -- # local d=1 00:03:53.072 07:19:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.072 07:19:02 -- scripts/common.sh@354 -- # echo 1 00:03:53.072 07:19:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:53.072 07:19:02 -- scripts/common.sh@365 -- # decimal 2 00:03:53.072 07:19:02 -- scripts/common.sh@352 -- # local d=2 00:03:53.072 07:19:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.072 07:19:02 -- scripts/common.sh@354 -- # echo 2 00:03:53.072 07:19:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:53.072 07:19:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:53.073 07:19:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:53.073 07:19:02 -- scripts/common.sh@367 -- # return 0 00:03:53.073 07:19:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.073 07:19:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:53.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.073 --rc genhtml_branch_coverage=1 00:03:53.073 --rc genhtml_function_coverage=1 00:03:53.073 --rc genhtml_legend=1 00:03:53.073 --rc geninfo_all_blocks=1 00:03:53.073 --rc geninfo_unexecuted_blocks=1 00:03:53.073 00:03:53.073 ' 00:03:53.073 07:19:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:53.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.073 --rc genhtml_branch_coverage=1 00:03:53.073 --rc genhtml_function_coverage=1 00:03:53.073 --rc genhtml_legend=1 00:03:53.073 --rc geninfo_all_blocks=1 00:03:53.073 --rc geninfo_unexecuted_blocks=1 00:03:53.073 00:03:53.073 ' 00:03:53.073 07:19:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:53.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.073 --rc genhtml_branch_coverage=1 00:03:53.073 --rc genhtml_function_coverage=1 00:03:53.073 --rc genhtml_legend=1 00:03:53.073 --rc geninfo_all_blocks=1 00:03:53.073 --rc geninfo_unexecuted_blocks=1 00:03:53.073 00:03:53.073 ' 00:03:53.073 07:19:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:53.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.073 --rc genhtml_branch_coverage=1 00:03:53.073 --rc genhtml_function_coverage=1 00:03:53.073 --rc genhtml_legend=1 00:03:53.073 --rc geninfo_all_blocks=1 00:03:53.073 --rc geninfo_unexecuted_blocks=1 00:03:53.073 00:03:53.073 ' 00:03:53.073 07:19:02 -- setup/test-setup.sh@10 -- # uname -s 00:03:53.073 07:19:02 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:53.073 07:19:02 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:53.073 07:19:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.073 07:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.073 07:19:02 -- common/autotest_common.sh@10 -- # set +x 00:03:53.073 ************************************ 00:03:53.073 START TEST acl 00:03:53.073 ************************************ 00:03:53.073 07:19:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:53.073 * Looking for test storage... 
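[editor's note] The acl suite starting here repeats the zoned-namespace scan already seen during pre-cleanup: get_zoned_devs walks /sys/block/nvme* and records any device whose queue reports a zoned model other than "none", so those devices can be excluded from destructive steps. A minimal sketch of that check, illustrative rather than the harness's literal code:

#!/usr/bin/env bash
# Sketch: the zoned-device scan that get_zoned_devs traces below.
# A device is flagged when its queue reports a zoned model other
# than "none" (e.g. host-managed or host-aware).
declare -A zoned_devs=()

for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue
    model=$(<"$nvme/queue/zoned")
    if [[ $model != none ]]; then
        zoned_devs[${nvme##*/}]=$model
    fi
done

echo "zoned devices found: ${#zoned_devs[@]}"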
00:03:53.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.335 07:19:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:53.335 07:19:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:53.335 07:19:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:53.335 07:19:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:53.335 07:19:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:53.335 07:19:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:53.335 07:19:02 -- scripts/common.sh@335 -- # IFS=.-: 00:03:53.335 07:19:02 -- scripts/common.sh@335 -- # read -ra ver1 00:03:53.335 07:19:02 -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.335 07:19:02 -- scripts/common.sh@336 -- # read -ra ver2 00:03:53.335 07:19:02 -- scripts/common.sh@337 -- # local 'op=<' 00:03:53.335 07:19:02 -- scripts/common.sh@339 -- # ver1_l=2 00:03:53.335 07:19:02 -- scripts/common.sh@340 -- # ver2_l=1 00:03:53.335 07:19:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:53.335 07:19:02 -- scripts/common.sh@343 -- # case "$op" in 00:03:53.335 07:19:02 -- scripts/common.sh@344 -- # : 1 00:03:53.335 07:19:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:53.335 07:19:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:53.335 07:19:02 -- scripts/common.sh@364 -- # decimal 1 00:03:53.335 07:19:02 -- scripts/common.sh@352 -- # local d=1 00:03:53.335 07:19:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.335 07:19:02 -- scripts/common.sh@354 -- # echo 1 00:03:53.335 07:19:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:53.335 07:19:02 -- scripts/common.sh@365 -- # decimal 2 00:03:53.335 07:19:02 -- scripts/common.sh@352 -- # local d=2 00:03:53.335 07:19:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.335 07:19:02 -- scripts/common.sh@354 -- # echo 2 00:03:53.335 07:19:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:53.335 07:19:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:53.335 07:19:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:53.335 07:19:02 -- scripts/common.sh@367 -- # return 0 00:03:53.335 07:19:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.335 07:19:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.335 --rc genhtml_branch_coverage=1 00:03:53.335 --rc genhtml_function_coverage=1 00:03:53.335 --rc genhtml_legend=1 00:03:53.335 --rc geninfo_all_blocks=1 00:03:53.335 --rc geninfo_unexecuted_blocks=1 00:03:53.335 00:03:53.335 ' 00:03:53.335 07:19:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.335 --rc genhtml_branch_coverage=1 00:03:53.335 --rc genhtml_function_coverage=1 00:03:53.335 --rc genhtml_legend=1 00:03:53.335 --rc geninfo_all_blocks=1 00:03:53.335 --rc geninfo_unexecuted_blocks=1 00:03:53.335 00:03:53.335 ' 00:03:53.335 07:19:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.335 --rc genhtml_branch_coverage=1 00:03:53.335 --rc genhtml_function_coverage=1 00:03:53.335 --rc genhtml_legend=1 00:03:53.335 --rc geninfo_all_blocks=1 00:03:53.335 --rc geninfo_unexecuted_blocks=1 00:03:53.335 00:03:53.335 ' 00:03:53.335 07:19:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:53.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.335 --rc genhtml_branch_coverage=1 00:03:53.335 --rc genhtml_function_coverage=1 00:03:53.335 --rc genhtml_legend=1 00:03:53.335 --rc geninfo_all_blocks=1 00:03:53.335 --rc geninfo_unexecuted_blocks=1 00:03:53.335 00:03:53.335 ' 00:03:53.335 07:19:02 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:53.335 07:19:02 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:53.335 07:19:02 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:53.335 07:19:02 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:53.335 07:19:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.335 07:19:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.335 07:19:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.335 07:19:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.335 07:19:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:53.335 07:19:02 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:53.335 07:19:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.335 07:19:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:53.335 07:19:02 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:53.335 07:19:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.335 07:19:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.335 07:19:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:53.335 07:19:02 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:53.335 
07:19:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:53.335 07:19:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.335 07:19:02 -- setup/acl.sh@12 -- # devs=() 00:03:53.335 07:19:02 -- setup/acl.sh@12 -- # declare -a devs 00:03:53.335 07:19:02 -- setup/acl.sh@13 -- # drivers=() 00:03:53.335 07:19:02 -- setup/acl.sh@13 -- # declare -A drivers 00:03:53.335 07:19:02 -- setup/acl.sh@51 -- # setup reset 00:03:53.335 07:19:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.335 07:19:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.281 07:19:03 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:54.281 07:19:03 -- setup/acl.sh@16 -- # local dev driver 00:03:54.281 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.281 07:19:03 -- setup/acl.sh@15 -- # setup output status 00:03:54.281 07:19:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.281 07:19:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:54.543 Hugepages 00:03:54.543 node hugesize free / total 00:03:54.543 07:19:03 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:54.543 07:19:03 -- setup/acl.sh@19 -- # continue 00:03:54.543 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.543 00:03:54.543 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.543 07:19:03 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:54.543 07:19:03 -- setup/acl.sh@19 -- # continue 00:03:54.543 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.543 07:19:03 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:54.543 07:19:03 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:54.543 07:19:03 -- setup/acl.sh@20 -- # continue 00:03:54.543 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.543 07:19:03 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:54.543 07:19:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.543 07:19:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:54.543 07:19:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.543 07:19:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.543 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.804 07:19:03 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:54.805 07:19:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.805 07:19:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:54.805 07:19:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.805 07:19:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.805 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.805 07:19:03 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:54.805 07:19:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.805 07:19:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:54.805 07:19:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.805 07:19:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.805 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.805 07:19:03 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:54.805 07:19:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.805 07:19:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:54.805 07:19:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.805 07:19:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
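[editor's note] The denied/allowed tests that follow hinge on one check: after setup.sh runs with PCI_BLOCKED or PCI_ALLOWED set, each controller's BDF is verified by resolving /sys/bus/pci/devices/<bdf>/driver and comparing the bound kernel driver against the expectation (a blocked controller should stay on nvme rather than move to a userspace driver). A sketch of that verification, illustrative only; the helper name and messages are assumptions:

#!/usr/bin/env bash
# Sketch: the driver-binding check behind the denied/allowed tests
# below. Resolve which kernel driver owns a PCI device and compare
# it to the expected one.
verify_driver() {
    local bdf=$1 expected=$2 link driver
    link=/sys/bus/pci/devices/$bdf/driver
    [[ -e $link ]] || { echo "$bdf: no driver bound"; return 1; }
    driver=$(readlink -f "$link")       # e.g. .../drivers/nvme
    driver=${driver##*/}
    if [[ $driver == "$expected" ]]; then
        echo "$bdf: bound to $driver (ok)"
    else
        echo "$bdf: bound to $driver, expected $expected"
        return 1
    fi
}

# Usage mirroring the log: the blocked controller should stay on nvme.
verify_driver 0000:00:06.0 nvme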
00:03:54.805 07:19:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.805 07:19:03 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:54.805 07:19:03 -- setup/acl.sh@54 -- # run_test denied denied 00:03:54.805 07:19:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:54.805 07:19:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:54.805 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:03:54.805 ************************************ 00:03:54.805 START TEST denied 00:03:54.805 ************************************ 00:03:54.805 07:19:03 -- common/autotest_common.sh@1114 -- # denied 00:03:54.805 07:19:03 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:54.805 07:19:03 -- setup/acl.sh@38 -- # setup output config 00:03:54.805 07:19:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.805 07:19:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.805 07:19:03 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:56.193 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:56.193 07:19:05 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:56.193 07:19:05 -- setup/acl.sh@28 -- # local dev driver 00:03:56.193 07:19:05 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:56.193 07:19:05 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:56.193 07:19:05 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:56.193 07:19:05 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.193 07:19:05 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.193 07:19:05 -- setup/acl.sh@41 -- # setup reset 00:03:56.193 07:19:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.193 07:19:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.809 00:04:02.809 real 0m7.045s 00:04:02.809 user 0m0.737s 00:04:02.809 sys 0m1.123s 00:04:02.809 07:19:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.809 ************************************ 00:04:02.809 END TEST denied 00:04:02.809 ************************************ 00:04:02.809 07:19:11 -- common/autotest_common.sh@10 -- # set +x 00:04:02.809 07:19:11 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:02.809 07:19:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.809 07:19:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.809 07:19:11 -- common/autotest_common.sh@10 -- # set +x 00:04:02.809 ************************************ 00:04:02.809 START TEST allowed 00:04:02.809 ************************************ 00:04:02.809 07:19:11 -- common/autotest_common.sh@1114 -- # allowed 00:04:02.809 07:19:11 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:02.809 07:19:11 -- setup/acl.sh@45 -- # setup output config 00:04:02.809 07:19:11 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:02.809 07:19:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.809 07:19:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:03.071 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.071 07:19:12 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:03.071 07:19:12 -- setup/acl.sh@28 -- # local dev driver 00:04:03.071 07:19:12 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:03.071 07:19:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:03.071 07:19:12 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:03.071 07:19:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:03.071 07:19:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:03.071 07:19:12 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:03.071 07:19:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:03.071 07:19:12 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:03.071 07:19:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:03.071 07:19:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:03.071 07:19:12 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:03.071 07:19:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:03.071 07:19:12 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:03.071 07:19:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:03.071 07:19:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:03.071 07:19:12 -- setup/acl.sh@48 -- # setup reset 00:04:03.071 07:19:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.071 07:19:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.013 00:04:04.013 real 0m2.100s 00:04:04.013 user 0m0.826s 00:04:04.013 sys 0m1.023s 00:04:04.013 07:19:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.013 ************************************ 00:04:04.013 END TEST allowed 00:04:04.013 ************************************ 00:04:04.013 07:19:13 -- common/autotest_common.sh@10 -- # set +x 00:04:04.013 00:04:04.013 real 0m10.943s 00:04:04.013 user 0m2.282s 00:04:04.013 sys 0m3.077s 00:04:04.013 07:19:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.013 ************************************ 00:04:04.013 07:19:13 -- common/autotest_common.sh@10 -- # set +x 00:04:04.013 END TEST acl 00:04:04.013 ************************************ 00:04:04.013 07:19:13 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:04.013 07:19:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.013 07:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.013 07:19:13 -- common/autotest_common.sh@10 -- # set +x 00:04:04.013 ************************************ 00:04:04.013 START TEST hugepages 00:04:04.013 ************************************ 00:04:04.013 07:19:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:04.275 * Looking for test storage... 
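[editor's note] The hugepages suite starting here builds on the per-node counters that the earlier "setup.sh status" output summarized as "node hugesize free / total". A sketch of how such a report can be derived from sysfs; the paths are standard Linux, but the formatting of the real script is assumed, not quoted:

#!/usr/bin/env bash
# Sketch: a "node hugesize free / total" style report, read from the
# per-NUMA-node hugepage counters in sysfs. Illustrative formatting.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        [[ -d $hp ]] || continue
        size=${hp##*hugepages-}         # e.g. 2048kB, 1048576kB
        free=$(<"$hp/free_hugepages")
        total=$(<"$hp/nr_hugepages")
        printf '%s %s %s / %s\n' "${node##*/}" "$size" "$free" "$total"
    done
done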
00:04:04.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:04.275 07:19:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:04.275 07:19:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:04.275 07:19:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:04.275 07:19:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:04.275 07:19:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:04.275 07:19:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:04.275 07:19:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:04.275 07:19:13 -- scripts/common.sh@335 -- # IFS=.-: 00:04:04.275 07:19:13 -- scripts/common.sh@335 -- # read -ra ver1 00:04:04.275 07:19:13 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.275 07:19:13 -- scripts/common.sh@336 -- # read -ra ver2 00:04:04.275 07:19:13 -- scripts/common.sh@337 -- # local 'op=<' 00:04:04.275 07:19:13 -- scripts/common.sh@339 -- # ver1_l=2 00:04:04.275 07:19:13 -- scripts/common.sh@340 -- # ver2_l=1 00:04:04.275 07:19:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:04.275 07:19:13 -- scripts/common.sh@343 -- # case "$op" in 00:04:04.275 07:19:13 -- scripts/common.sh@344 -- # : 1 00:04:04.275 07:19:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:04.275 07:19:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.275 07:19:13 -- scripts/common.sh@364 -- # decimal 1 00:04:04.275 07:19:13 -- scripts/common.sh@352 -- # local d=1 00:04:04.275 07:19:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.275 07:19:13 -- scripts/common.sh@354 -- # echo 1 00:04:04.275 07:19:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:04.275 07:19:13 -- scripts/common.sh@365 -- # decimal 2 00:04:04.275 07:19:13 -- scripts/common.sh@352 -- # local d=2 00:04:04.275 07:19:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.275 07:19:13 -- scripts/common.sh@354 -- # echo 2 00:04:04.275 07:19:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:04.275 07:19:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:04.275 07:19:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:04.275 07:19:13 -- scripts/common.sh@367 -- # return 0 00:04:04.275 07:19:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.275 07:19:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.275 --rc genhtml_branch_coverage=1 00:04:04.275 --rc genhtml_function_coverage=1 00:04:04.275 --rc genhtml_legend=1 00:04:04.275 --rc geninfo_all_blocks=1 00:04:04.275 --rc geninfo_unexecuted_blocks=1 00:04:04.275 00:04:04.275 ' 00:04:04.275 07:19:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.275 --rc genhtml_branch_coverage=1 00:04:04.275 --rc genhtml_function_coverage=1 00:04:04.275 --rc genhtml_legend=1 00:04:04.275 --rc geninfo_all_blocks=1 00:04:04.275 --rc geninfo_unexecuted_blocks=1 00:04:04.275 00:04:04.275 ' 00:04:04.275 07:19:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.275 --rc genhtml_branch_coverage=1 00:04:04.275 --rc genhtml_function_coverage=1 00:04:04.275 --rc genhtml_legend=1 00:04:04.275 --rc geninfo_all_blocks=1 00:04:04.275 --rc geninfo_unexecuted_blocks=1 00:04:04.275 00:04:04.275 ' 00:04:04.275 07:19:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:04.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.275 --rc genhtml_branch_coverage=1
00:04:04.275 --rc genhtml_function_coverage=1
00:04:04.275 --rc genhtml_legend=1
00:04:04.275 --rc geninfo_all_blocks=1
00:04:04.275 --rc geninfo_unexecuted_blocks=1
00:04:04.275
00:04:04.275 '
00:04:04.275 07:19:13 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:04.275 07:19:13 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:04.275 07:19:13 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:04.275 07:19:13 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:04.275 07:19:13 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:04.275 07:19:13 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:04.275 07:19:13 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:04.275 07:19:13 -- setup/common.sh@18 -- # local node=
00:04:04.276 07:19:13 -- setup/common.sh@19 -- # local var val
00:04:04.276 07:19:13 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.276 07:19:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.276 07:19:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.276 07:19:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.276 07:19:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.276 07:19:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.276 07:19:13 -- setup/common.sh@31 -- # IFS=': '
00:04:04.276 07:19:13 -- setup/common.sh@31 -- # read -r var val _
00:04:04.276 07:19:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5798092 kB' 'MemAvailable: 7354276 kB' 'Buffers: 2684 kB' 'Cached: 1769040 kB' 'SwapCached: 0 kB' 'Active: 465404 kB' 'Inactive: 1421984 kB' 'Active(anon): 126196 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1421984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117360 kB' 'Mapped: 51040 kB' 'Shmem: 10532 kB' 'KReclaimable: 63836 kB' 'Slab: 162344 kB' 'SReclaimable: 63836 kB' 'SUnreclaim: 98508 kB' 'KernelStack: 6592 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 310776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the scan tests every /proc/meminfo key from MemTotal through HugePages_Surp against Hugepagesize; each mismatch hits 'continue']
00:04:04.277 07:19:13 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:04.277 07:19:13 -- setup/common.sh@33 -- # echo 2048
00:04:04.277 07:19:13 -- setup/common.sh@33 -- # return 0
00:04:04.277 07:19:13 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:04.277 07:19:13 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
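The probe that just returned is the whole trick of get_meminfo: split each /proc/meminfo line on ': ', skip keys until the requested one matches, then echo its value. A minimal standalone sketch of that scan, offered as an illustration under the assumption of a plain /proc/meminfo with no per-node override (this loop is not lifted from setup/common.sh):

    # Illustrative re-creation of the Hugepagesize lookup traced above:
    while IFS=': ' read -r var val _; do
        [[ $var == Hugepagesize ]] || continue   # every non-matching key falls through, as in the xtrace
        echo "$val"                              # 2048 (kB) on this runner
        break
    done < /proc/meminfo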
00:04:04.277 07:19:13 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:04.277 07:19:13 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:04.277 07:19:13 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:04.277 07:19:13 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:04.277 07:19:13 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:04.277 07:19:13 -- setup/hugepages.sh@207 -- # get_nodes
00:04:04.277 07:19:13 -- setup/hugepages.sh@27 -- # local node
00:04:04.277 07:19:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.277 07:19:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:04.277 07:19:13 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:04.277 07:19:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.277 07:19:13 -- setup/hugepages.sh@208 -- # clear_hp
00:04:04.277 07:19:13 -- setup/hugepages.sh@37 -- # local node hp
00:04:04.277 07:19:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:04.277 07:19:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:04.277 07:19:13 -- setup/hugepages.sh@41 -- # echo 0
00:04:04.277 07:19:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:04.277 07:19:13 -- setup/hugepages.sh@41 -- # echo 0
00:04:04.277 07:19:13 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:04.277 07:19:13 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:04.277 07:19:13 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:04.277 07:19:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:04.277 07:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:04.277 07:19:13 -- common/autotest_common.sh@10 -- # set +x
00:04:04.277 ************************************
00:04:04.277 START TEST default_setup
00:04:04.277 ************************************
00:04:04.277 07:19:13 -- common/autotest_common.sh@1114 -- # default_setup
00:04:04.277 07:19:13 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:04.277 07:19:13 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:04.277 07:19:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:04.277 07:19:13 -- setup/hugepages.sh@51 -- # shift
00:04:04.277 07:19:13 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:04.277 07:19:13 -- setup/hugepages.sh@52 -- # local node_ids
00:04:04.277 07:19:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:04.277 07:19:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:04.277 07:19:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:04.277 07:19:13 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:04.277 07:19:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:04.277 07:19:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:04.277 07:19:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:04.277 07:19:13 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:04.277 07:19:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:04.277 07:19:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:04.277 07:19:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:04.277 07:19:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:04.277 07:19:13 -- setup/hugepages.sh@73 -- # return 0
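The sizing above follows from two numbers already in the log: get_test_nr_hugepages was asked for 2097152 kB, and the default huge page size was read as 2048 kB just before, which is how nr_hugepages lands on 1024. A hedged sketch of that arithmetic (variable names are illustrative, not the script's own):

    # 2097152 kB requested / 2048 kB per huge page = 1024 pages
    size_kb=2097152        # argument passed to get_test_nr_hugepages in the trace
    hugepagesize_kb=2048   # value echoed by the get_meminfo Hugepagesize probe
    echo $(( size_kb / hugepagesize_kb ))   # prints 1024, matching nr_hugepages=1024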
00:04:04.277 07:19:13 -- setup/hugepages.sh@137 -- # setup output
00:04:04.277 07:19:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.277 07:19:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:05.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:05.486 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:05.486 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:05.486 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:05.486 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:05.486 07:19:14 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:05.486 07:19:14 -- setup/hugepages.sh@89 -- # local node
00:04:05.486 07:19:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.486 07:19:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.486 07:19:14 -- setup/hugepages.sh@92 -- # local surp
00:04:05.486 07:19:14 -- setup/hugepages.sh@93 -- # local resv
00:04:05.486 07:19:14 -- setup/hugepages.sh@94 -- # local anon
00:04:05.486 07:19:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.486 07:19:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:05.486 07:19:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:05.486 07:19:14 -- setup/common.sh@18 -- # local node=
00:04:05.486 07:19:14 -- setup/common.sh@19 -- # local var val
00:04:05.486 07:19:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.486 07:19:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.486 07:19:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.486 07:19:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.486 07:19:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.486 07:19:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.486 07:19:14 -- setup/common.sh@31 -- # IFS=': '
00:04:05.486 07:19:14 -- setup/common.sh@31 -- # read -r var val _
00:04:05.486 07:19:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7928776 kB' 'MemAvailable: 9484744 kB' [... /proc/meminfo snapshot, same shape as the one above; notable fields now: 'AnonHugePages: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'Hugetlb: 2097152 kB' ...] 'DirectMap1G: 9437184 kB'
[xtrace elided: the same field-by-field scan as above, now against AnonHugePages; every earlier key hits 'continue']
00:04:05.487 07:19:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.487 07:19:14 -- setup/common.sh@33 -- # echo 0
00:04:05.487 07:19:14 -- setup/common.sh@33 -- # return 0
00:04:05.487 07:19:14 -- setup/hugepages.sh@97 -- # anon=0
00:04:05.488 07:19:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.488 07:19:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.488 07:19:14 -- setup/common.sh@18 -- # local node=
00:04:05.488 07:19:14 -- setup/common.sh@19 -- # local var val
00:04:05.488 07:19:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.488 07:19:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.488 07:19:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.488 07:19:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.488 07:19:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.488 07:19:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.488 07:19:14 -- setup/common.sh@31 -- # IFS=': '
00:04:05.488 07:19:14 -- setup/common.sh@31 -- # read -r var val _
00:04:05.488 07:19:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929492 kB' [... snapshot effectively unchanged; 'HugePages_Surp: 0' 'HugePages_Total: 1024' 'HugePages_Free: 1024' ...] 'DirectMap1G: 9437184 kB'
[xtrace elided: the scan repeats against HugePages_Surp until the matching key]
00:04:05.489 07:19:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.489 07:19:14 -- setup/common.sh@33 -- # echo 0
00:04:05.489 07:19:14 -- setup/common.sh@33 -- # return 0
00:04:05.489 07:19:14 -- setup/hugepages.sh@99 -- # surp=0
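Both probes so far came back zero (anon=0, surp=0), and a third probe for HugePages_Rsvd follows below. All three reduce to plain /proc/meminfo reads; an equivalent one-pass lookup, offered as an illustration rather than the script's own code:

    # Fetch the same three fields verify_nr_hugepages probes one at a time:
    awk '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd):/ {print $1, $2}' /proc/meminfo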
00:04:05.489 07:19:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.489 07:19:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.489 07:19:14 -- setup/common.sh@18 -- # local node=
00:04:05.489 07:19:14 -- setup/common.sh@19 -- # local var val
00:04:05.489 07:19:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.489 07:19:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.489 07:19:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.489 07:19:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.489 07:19:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.489 07:19:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.489 07:19:14 -- setup/common.sh@31 -- # IFS=': '
00:04:05.489 07:19:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929492 kB' [... snapshot effectively identical to the HugePages_Surp one above, including 'HugePages_Rsvd: 0' ...] 'DirectMap1G: 9437184 kB'
00:04:05.489 07:19:14 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the HugePages_Rsvd scan is still walking the earlier keys (MemTotal through WritebackTmp) when this log capture cuts off mid-record]
setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # continue 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.490 07:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.490 07:19:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.490 07:19:14 -- setup/common.sh@33 -- # echo 0 00:04:05.490 07:19:14 -- setup/common.sh@33 -- # return 0 00:04:05.490 nr_hugepages=1024 00:04:05.490 resv_hugepages=0 00:04:05.490 surplus_hugepages=0 00:04:05.490 anon_hugepages=0 00:04:05.490 07:19:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.490 07:19:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.490 07:19:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.490 07:19:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.490 07:19:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.490 07:19:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.490 07:19:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.490 07:19:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.490 07:19:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.490 07:19:14 -- setup/common.sh@18 -- # local node= 00:04:05.490 07:19:14 -- setup/common.sh@19 -- # local var val 00:04:05.490 07:19:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.490 07:19:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.490 07:19:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.491 07:19:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.491 07:19:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.491 07:19:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.491 07:19:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929492 kB' 'MemAvailable: 9485472 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466768 kB' 'Inactive: 1422008 kB' 'Active(anon): 127560 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118632 kB' 'Mapped: 50820 kB' 'Shmem: 
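The blocks of IFS=': ' / read -r var val _ / continue entries condensed above are single passes of the common.sh get_meminfo helper: it snapshots a meminfo file into an array and scans it for one field. A minimal, self-contained bash sketch of that lookup (the name get_meminfo_sketch is ours, and this approximates the traced logic rather than reproducing the exact SPDK code):

#!/usr/bin/env bash
# Sketch of the lookup the xtrace shows: print one field from
# /proc/meminfo, or, given a node id, from the per-node sysfs meminfo
# (whose lines carry a "Node <n> " prefix that must be stripped).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2-} var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node N " prefix, if any
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"  # field name, value, optional unit
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"                   # e.g. "0" for HugePages_Rsvd above
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd     # -> 0 on the system traced here
get_meminfo_sketch HugePages_Surp 0   # node 0's surplus hugepages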
00:04:05.490 07:19:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.490 07:19:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.490 07:19:14 -- setup/common.sh@18 -- # local node=
00:04:05.490 07:19:14 -- setup/common.sh@19 -- # local var val
00:04:05.490 07:19:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.490 07:19:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.490 07:19:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.491 07:19:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.491 07:19:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.491 07:19:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.491 07:19:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929492 kB' 'MemAvailable: 9485472 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466768 kB' 'Inactive: 1422008 kB' 'Active(anon): 127560 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118632 kB' 'Mapped: 50820 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161828 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98448 kB' 'KernelStack: 6576 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the field-matching loop walks every snapshot line until HugePages_Total matches]
00:04:05.492 07:19:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.492 07:19:14 -- setup/common.sh@33 -- # echo 1024
00:04:05.492 07:19:14 -- setup/common.sh@33 -- # return 0
00:04:05.492 07:19:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
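The @107/@110 arithmetic entries are the pool-consistency invariant this test keeps re-checking: the kernel's HugePages_Total must equal the count the test configured plus the surplus and reserved pages it just read back. A small sketch of that check using this run's values (variable names are illustrative, not the script's own):

# Pull the three counters straight from /proc/meminfo and verify the invariant.
nr_hugepages=1024                                       # what the test configured
read -r _ surp  < <(grep HugePages_Surp:  /proc/meminfo)
read -r _ resv  < <(grep HugePages_Rsvd:  /proc/meminfo)
read -r _ total < <(grep HugePages_Total: /proc/meminfo)
(( total == nr_hugepages + surp + resv )) \
    && echo "pool consistent: $total == $nr_hugepages + $surp + $resv"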
00:04:05.492 07:19:14 -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.492 07:19:14 -- setup/hugepages.sh@27 -- # local node
00:04:05.492 07:19:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.492 07:19:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.492 07:19:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:05.492 07:19:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.492 07:19:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.492 07:19:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.492 07:19:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.492 07:19:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.492 07:19:14 -- setup/common.sh@18 -- # local node=0
00:04:05.492 07:19:14 -- setup/common.sh@19 -- # local var val
00:04:05.492 07:19:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.492 07:19:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.492 07:19:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.492 07:19:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.492 07:19:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.492 07:19:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.492 07:19:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929492 kB' 'MemUsed: 4307604 kB' 'SwapCached: 0 kB' 'Active: 466800 kB' 'Inactive: 1422008 kB' 'Active(anon): 127592 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1771712 kB' 'Mapped: 50820 kB' 'AnonPages: 118672 kB' 'Shmem: 10492 kB' 'KernelStack: 6576 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63380 kB' 'Slab: 161828 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the field-matching loop walks the node0 snapshot until HugePages_Surp matches]
00:04:05.493 07:19:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.493 07:19:14 -- setup/common.sh@33 -- # echo 0
00:04:05.493 07:19:14 -- setup/common.sh@33 -- # return 0
00:04:05.493 07:19:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.493 07:19:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.493 07:19:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.493 07:19:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.493 07:19:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:05.493 node0=1024 expecting 1024
00:04:05.493 07:19:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.493 real	0m1.292s
00:04:05.493 user	0m0.515s
00:04:05.493 sys	0m0.602s
00:04:05.493 07:19:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:05.493 07:19:14 -- common/autotest_common.sh@10 -- # set +x
00:04:05.493 ************************************
00:04:05.493 END TEST default_setup
00:04:05.493 ************************************
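default_setup passed because the per-node tally matched: the @115-@128 entries walk each NUMA node, read its sysfs meminfo, and compare what the kernel reports against the expected count. A sketch of that per-node comparison (names are ours, and the exact expectation bookkeeping in verify_nr_hugepages is simplified here; the per-node lines carry a "Node <n>" prefix, as the trace's "${mem[@]#Node +([0-9]) }" stripping shows):

expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
    read -r _ _ _ got < <(grep HugePages_Total: "$node_dir/meminfo")
    echo "node$node=$got expecting $expected"
    [[ $got == "$expected" ]] || exit 1
done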
00:04:05.754 07:19:14 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:05.754 07:19:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:05.754 07:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:05.754 07:19:14 -- common/autotest_common.sh@10 -- # set +x
00:04:05.754 ************************************
00:04:05.754 START TEST per_node_1G_alloc
00:04:05.754 ************************************
00:04:05.754 07:19:14 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:05.754 07:19:14 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:05.754 07:19:14 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:05.754 07:19:14 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:05.754 07:19:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:05.754 07:19:14 -- setup/hugepages.sh@51 -- # shift
00:04:05.754 07:19:14 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:05.754 07:19:14 -- setup/hugepages.sh@52 -- # local node_ids
00:04:05.754 07:19:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.754 07:19:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:05.754 07:19:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:05.755 07:19:14 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:05.755 07:19:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.755 07:19:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:05.755 07:19:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:05.755 07:19:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:05.755 07:19:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.755 07:19:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:05.755 07:19:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:05.755 07:19:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:05.755 07:19:14 -- setup/hugepages.sh@73 -- # return 0
00:04:05.755 07:19:14 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:05.755 07:19:14 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:05.755 07:19:14 -- setup/hugepages.sh@146 -- # setup output
00:04:05.755 07:19:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.755 07:19:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:06.016 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.016 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.016 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.016 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
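get_test_nr_hugepages 1048576 0 converts a size budget into a page count: 1048576 kB (1 GiB) at the 2048 kB Hugepagesize seen in the snapshots yields the nr_hugepages=512 in the trace, pinned to node 0 via HUGENODE=0. The arithmetic, sketched with this run's values (variable names are illustrative):

size_kb=1048576               # 1 GiB total, as passed by per_node_1G_alloc
default_hugepages_kb=2048     # Hugepagesize reported in /proc/meminfo
(( size_kb >= default_hugepages_kb ))          # sanity check, as in the trace
nr_hugepages=$(( size_kb / default_hugepages_kb ))
echo "NRHUGE=$nr_hugepages HUGENODE=0"         # -> NRHUGE=512 HUGENODE=0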
00:04:06.016 07:19:15 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:06.016 07:19:15 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:06.016 07:19:15 -- setup/hugepages.sh@89 -- # local node
00:04:06.016 07:19:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.016 07:19:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.016 07:19:15 -- setup/hugepages.sh@92 -- # local surp
00:04:06.016 07:19:15 -- setup/hugepages.sh@93 -- # local resv
00:04:06.016 07:19:15 -- setup/hugepages.sh@94 -- # local anon
00:04:06.016 07:19:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.016 07:19:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.016 07:19:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.016 07:19:15 -- setup/common.sh@18 -- # local node=
00:04:06.016 07:19:15 -- setup/common.sh@19 -- # local var val
00:04:06.016 07:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.016 07:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.016 07:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.016 07:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.016 07:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.016 07:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.016 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8982552 kB' 'MemAvailable: 10538536 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 467388 kB' 'Inactive: 1422012 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119228 kB' 'Mapped: 50916 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161860 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98480 kB' 'KernelStack: 6588 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the field-matching loop walks every snapshot line until AnonHugePages matches]
00:04:06.282 07:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.282 07:19:15 -- setup/common.sh@33 -- # echo 0
00:04:06.282 07:19:15 -- setup/common.sh@33 -- # return 0
00:04:06.282 07:19:15 -- setup/hugepages.sh@97 -- # anon=0
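The @96 test against *\[\n\e\v\e\r\]* is a transparent-hugepage gate: the kernel brackets the active THP mode (here "always [madvise] never"), and the AnonHugePages lookup that just returned 0 only runs when that mode is not [never]. A sketch of the gate (variable names are ours):

# Skip the anonymous-hugepage accounting entirely when THP is disabled.
if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
    read -r _ anon _ < <(grep AnonHugePages: /proc/meminfo)
    echo "anon_hugepages=$anon"    # 0 kB in this run
fi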
00:04:06.282 07:19:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.282 07:19:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.282 07:19:15 -- setup/common.sh@18 -- # local node=
00:04:06.282 07:19:15 -- setup/common.sh@19 -- # local var val
00:04:06.282 07:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.282 07:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.282 07:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.282 07:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.282 07:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.282 07:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.282 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8982696 kB' 'MemAvailable: 10538680 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 467076 kB' 'Inactive: 1422012 kB' 'Active(anon): 127868 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118916 kB' 'Mapped: 50764 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161892 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98512 kB' 'KernelStack: 6608 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
00:04:06.282 07:19:15 -- setup/common.sh@31-32 -- # [xtrace elided: IFS=': ' read/compare loop skips every key in the snapshot above until HugePages_Surp]
00:04:06.283 07:19:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.283 07:19:15 -- setup/common.sh@33 -- # echo 0
00:04:06.283 07:19:15 -- setup/common.sh@33 -- # return 0
00:04:06.283 07:19:15 -- setup/hugepages.sh@99 -- # surp=0
00:04:06.283 07:19:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.283 07:19:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.283 07:19:15 -- setup/common.sh@18-29 -- # [xtrace elided: locals and mapfile of /proc/meminfo, identical to the call above]
00:04:06.284 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8982696 kB' 'MemAvailable: 10538680 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466968 kB' 'Inactive: 1422012 kB' 'Active(anon): 127760 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118812 kB' 'Mapped: 50764 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161888 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98508 kB' 'KernelStack: 6560 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
00:04:06.284 07:19:15 -- setup/common.sh@31-32 -- # [xtrace elided: read/compare loop continues until HugePages_Rsvd]
00:04:06.285 07:19:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.285 07:19:15 -- setup/common.sh@33 -- # echo 0
00:04:06.285 07:19:15 -- setup/common.sh@33 -- # return 0
00:04:06.285 nr_hugepages=512
00:04:06.285 resv_hugepages=0
00:04:06.285 surplus_hugepages=0
00:04:06.285 anon_hugepages=0
00:04:06.285 07:19:15 -- setup/hugepages.sh@100 -- # resv=0
00:04:06.285 07:19:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:06.285 07:19:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.285 07:19:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.285 07:19:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.285 07:19:15 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:06.285 07:19:15 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
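With anon, surp and resv all resolved to 0, the two arithmetic guards above encode the invariant this test cares about: the kernel's hugepage pool must exactly match what was requested, with no surplus or reserved pages masking a partial allocation. Restated with this run's numbers, reusing the illustrative get_mem sketch from earlier (not the harness's own code):

    nr_hugepages=512                    # what the test requested
    surp=$(get_mem HugePages_Surp)      # 0 in this run
    resv=$(get_mem HugePages_Rsvd)      # 0 in this run
    total=$(get_mem HugePages_Total)    # 512 in this run
    (( total == nr_hugepages + surp + resv )) ||
        echo "pool mismatch: $total != $((nr_hugepages + surp + resv))" >&2

Here 512 == 512 + 0 + 0, so both checks pass and the lookup of HugePages_Total below confirms it.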
00:04:06.285 07:19:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.285 07:19:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.285 07:19:15 -- setup/common.sh@18-29 -- # [xtrace elided: locals and mapfile of /proc/meminfo, identical to the calls above]
00:04:06.285 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8982696 kB' 'MemAvailable: 10538680 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466916 kB' 'Inactive: 1422012 kB' 'Active(anon): 127708 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118792 kB' 'Mapped: 50764 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161888 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98508 kB' 'KernelStack: 6596 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
00:04:06.285 07:19:15 -- setup/common.sh@31-32 -- # [xtrace elided: read/compare loop continues until HugePages_Total]
00:04:06.286 07:19:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.286 07:19:15 -- setup/common.sh@33 -- # echo 512
00:04:06.286 07:19:15 -- setup/common.sh@33 -- # return 0
00:04:06.287 07:19:15 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:06.287 07:19:15 -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.287 07:19:15 -- setup/hugepages.sh@27 -- # local node
00:04:06.287 07:19:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.287 07:19:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:06.287 07:19:15 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:06.287 07:19:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
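get_nodes above discovered a single NUMA node and recorded the expected 512 pages for it. The same per-node counts can be read directly from the kernel's sysfs hugepage counters; an illustrative loop (the harness derives its numbers from node<N>/meminfo instead, and the hugepages-2048kB directory name assumes the 2 MiB page size reported in the snapshots above):

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        nr=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node$node: $nr hugepages"
    done

On this machine that would print node0: 512 hugepages, which is what the node0=512 expecting 512 check below asserts.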
nodes_test[node] += resv )) 00:04:06.287 07:19:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.287 07:19:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.287 07:19:15 -- setup/common.sh@18 -- # local node=0 00:04:06.287 07:19:15 -- setup/common.sh@19 -- # local var val 00:04:06.287 07:19:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.287 07:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.287 07:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.287 07:19:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.287 07:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.287 07:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8982444 kB' 'MemUsed: 3254652 kB' 'SwapCached: 0 kB' 'Active: 467024 kB' 'Inactive: 1422012 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1771712 kB' 'Mapped: 50820 kB' 'AnonPages: 118896 kB' 'Shmem: 10492 kB' 'KernelStack: 6576 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63380 kB' 'Slab: 161900 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 
07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.287 07:19:15 -- setup/common.sh@32 -- # continue 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.287 07:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.287 
[xtrace condensed: setup/common.sh@31-32 walks each remaining node-meminfo field (PageTables through HugePages_Free) with IFS=': ' / read -r var val _, taking "continue" on every non-match for HugePages_Surp]
00:04:06.288 07:19:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.288 07:19:15 -- setup/common.sh@33 -- # echo 0
00:04:06.288 07:19:15 -- setup/common.sh@33 -- # return 0
00:04:06.288 07:19:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.288 07:19:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.288 07:19:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.288 07:19:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.288 07:19:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:06.288 07:19:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:06.288 ************************************
00:04:06.288 END TEST per_node_1G_alloc
00:04:06.288 ************************************
00:04:06.288
00:04:06.288 real	0m0.599s
00:04:06.288 user	0m0.264s
00:04:06.288 sys	0m0.340s
00:04:06.288 07:19:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:06.288 07:19:15 -- common/autotest_common.sh@10 -- # set +x
00:04:06.288 07:19:15 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:06.288 07:19:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:06.288 07:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:06.288 07:19:15 -- common/autotest_common.sh@10 -- # set +x
00:04:06.288 ************************************
00:04:06.288 START TEST even_2G_alloc
00:04:06.288 ************************************
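The trace that follows computes the hugepage pool for this test: even_2G_alloc asks get_test_nr_hugepages for 2097152 kB (2 GiB), and with the 2048 kB Hugepagesize reported in the meminfo snapshots further down that works out to nr_hugepages=1024, all placed on the single memory node. A minimal sketch of that arithmetic (an illustrative reconstruction of the traced hugepages.sh helpers, not the actual SPDK scripts):

#!/usr/bin/env bash
# Illustrative reconstruction of the hugepages.sh logic traced below;
# default_hugepages mirrors 'Hugepagesize: 2048 kB' from the snapshots.
default_hugepages=2048  # kB per 2 MiB huge page

get_test_nr_hugepages() {
    local size=$1  # requested pool in kB; 2097152 kB = 2 GiB
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
}

get_test_nr_hugepages 2097152
echo "nr_hugepages=$nr_hugepages"  # prints nr_hugepages=1024
# With a single memory node (_no_nodes=1 in the trace), the whole pool
# is assigned to node 0: nodes_test[0]=1024.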
00:04:06.288 07:19:15 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:06.288 07:19:15 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:06.288 07:19:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:06.288 07:19:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:06.288 07:19:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.288 07:19:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:06.288 07:19:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:06.288 07:19:15 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:06.288 07:19:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.288 07:19:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:06.288 07:19:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:06.288 07:19:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.288 07:19:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.288 07:19:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:06.288 07:19:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:06.288 07:19:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:06.288 07:19:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:06.288 07:19:15 -- setup/hugepages.sh@83 -- # : 0
00:04:06.288 07:19:15 -- setup/hugepages.sh@84 -- # : 0
00:04:06.288 07:19:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:06.288 07:19:15 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:06.288 07:19:15 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:06.288 07:19:15 -- setup/hugepages.sh@153 -- # setup output
00:04:06.288 07:19:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.288 07:19:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:06.866 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.866 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.866 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.866 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
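The verification that follows leans on the get_meminfo helper traced below: it reads /proc/meminfo (or a node's meminfo file) into an array, splits each line on ': ', compares the field name against the requested key, and echoes the value on the first match. A minimal self-contained sketch of that lookup pattern, assuming plain /proc/meminfo (the traced helper additionally strips the 'Node N' prefix for per-node files, omitted here):

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup the trace below steps through:
# split each 'Field: value [unit]' line on ': ' and return the value
# for the requested field name.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Total  # 1024 in this run
get_meminfo HugePages_Surp   # 0 in this run

The counters the verifier asks for are exactly the hugepage fields visible in the snapshots (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total); the same values can be pulled by hand with grep -E 'HugePages_|Hugepagesize|Hugetlb' /proc/meminfo.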
00:04:06.866 07:19:15 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:06.866 07:19:15 -- setup/hugepages.sh@89 -- # local node
00:04:06.866 07:19:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.866 07:19:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.866 07:19:15 -- setup/hugepages.sh@92 -- # local surp
00:04:06.866 07:19:15 -- setup/hugepages.sh@93 -- # local resv
00:04:06.866 07:19:15 -- setup/hugepages.sh@94 -- # local anon
00:04:06.866 07:19:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.866 07:19:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.866 07:19:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.866 07:19:15 -- setup/common.sh@18 -- # local node=
00:04:06.866 07:19:15 -- setup/common.sh@19 -- # local var val
00:04:06.866 07:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.866 07:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.866 07:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.866 07:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.866 07:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.866 07:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.866 07:19:15 -- setup/common.sh@31 -- # IFS=': '
00:04:06.866 07:19:15 -- setup/common.sh@31 -- # read -r var val _
00:04:06.866 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932088 kB' 'MemAvailable: 9488072 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466948 kB' 'Inactive: 1422012 kB' 'Active(anon): 127740 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118788 kB' 'Mapped: 50948 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161860 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98480 kB' 'KernelStack: 6580 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the @31/@32 continue loop walks every field from MemTotal through HardwareCorrupted without matching AnonHugePages]
00:04:06.867 07:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.867 07:19:15 -- setup/common.sh@33 -- # echo 0
00:04:06.867 07:19:15 -- setup/common.sh@33 -- # return 0
00:04:06.867 07:19:15 -- setup/hugepages.sh@97 -- # anon=0
00:04:06.867 07:19:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.867 07:19:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.867 07:19:15 -- setup/common.sh@18 -- # local node=
00:04:06.867 07:19:15 -- setup/common.sh@19 -- # local var val
00:04:06.867 07:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.867 07:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.867 07:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.867 07:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.868 07:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.868 07:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.868 07:19:15 -- setup/common.sh@31 -- # IFS=': '
00:04:06.868 07:19:15 -- setup/common.sh@31 -- # read -r var val _
00:04:06.868 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' [/proc/meminfo snapshot #2 — identical to snapshot #1 above except: Active: 466892 kB, Active(anon): 127684 kB, AnonPages: 118796 kB, Mapped: 50820 kB, Slab: 161884 kB, SUnreclaim: 98504 kB, KernelStack: 6608 kB, PageTables: 3992 kB]
[xtrace condensed: the @31/@32 continue loop walks every field from MemTotal through HugePages_Rsvd without matching HugePages_Surp]
00:04:06.869 07:19:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.869 07:19:15 -- setup/common.sh@33 -- # echo 0
00:04:06.869 07:19:15 -- setup/common.sh@33 -- # return 0
00:04:06.869 07:19:15 -- setup/hugepages.sh@99 -- # surp=0
00:04:06.869 07:19:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.869 07:19:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.869 07:19:15 -- setup/common.sh@18 -- # local node=
00:04:06.869 07:19:15 -- setup/common.sh@19 -- # local var val
00:04:06.869 07:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.869 07:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.869 07:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.869 07:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.869 07:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.869 07:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.869 07:19:15 -- setup/common.sh@31 -- # IFS=': '
00:04:06.869 07:19:15 -- setup/common.sh@31 -- # read -r var val _
00:04:06.869 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' [/proc/meminfo snapshot #3 — identical to snapshot #1 above except: Active: 466920 kB, Active(anon): 127712 kB, Mapped: 50820 kB, Slab: 161884 kB, SUnreclaim: 98504 kB, KernelStack: 6592 kB, PageTables: 3952 kB, VmallocUsed: 55672 kB]
[xtrace condensed: the @31/@32 continue loop walks every field from MemTotal through HugePages_Free without matching HugePages_Rsvd]
00:04:06.870 07:19:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.870 07:19:15 -- setup/common.sh@33 -- # echo 0
00:04:06.870 07:19:15 -- setup/common.sh@33 -- # return 0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
00:04:06.870 07:19:15 -- setup/hugepages.sh@100 -- # resv=0
00:04:06.870 07:19:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:06.870 07:19:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.870 07:19:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:06.870 07:19:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.870 07:19:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.870 07:19:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:06.870 07:19:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.870 07:19:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.870 07:19:15 -- setup/common.sh@18 -- # local node=
00:04:06.870 07:19:15 -- setup/common.sh@19 -- # local var val
00:04:06.870 07:19:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.870 07:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.870 07:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.870 07:19:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.871 07:19:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.871 07:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.871 07:19:15 -- setup/common.sh@31 -- # IFS=': '
00:04:06.871 07:19:15 -- setup/common.sh@31 -- # read -r var val _
00:04:06.871 07:19:15 -- setup/common.sh@16 -- # printf '%s\n' [/proc/meminfo snapshot #4 — identical to snapshot #1 above except: Active: 466884 kB, Active(anon): 127676 kB, AnonPages: 118756 kB, Mapped: 50820 kB, Slab: 161884 kB, SUnreclaim: 98504 kB, KernelStack: 6576 kB, PageTables: 3912 kB, VmallocUsed: 55672 kB]
[xtrace condensed: the @31/@32 continue loop is walking the fields again for HugePages_Total; the captured log breaks off at 07:19:16 during the FileHugePages comparison]
00:04:06.872 07:19:16 --
setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.872 07:19:16 -- setup/common.sh@33 -- # echo 1024 00:04:06.872 07:19:16 -- setup/common.sh@33 -- # return 0 00:04:06.872 07:19:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.872 07:19:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.872 07:19:16 -- setup/hugepages.sh@27 -- # local node 00:04:06.872 07:19:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.872 07:19:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.872 07:19:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.872 07:19:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.872 07:19:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.872 07:19:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.872 07:19:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.872 07:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.872 07:19:16 -- setup/common.sh@18 -- # local node=0 00:04:06.872 07:19:16 -- setup/common.sh@19 -- # local var val 00:04:06.872 07:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.872 07:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.872 07:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.872 07:19:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.872 07:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.872 07:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932088 kB' 'MemUsed: 4305008 kB' 'SwapCached: 0 kB' 'Active: 466888 kB' 'Inactive: 1422012 kB' 'Active(anon): 127680 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1771712 kB' 'Mapped: 50820 kB' 'AnonPages: 118796 kB' 'Shmem: 10492 kB' 
'KernelStack: 6592 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63380 kB' 'Slab: 161876 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.872 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.872 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # continue 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.873 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.873 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.873 
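What the long runs of '[[ key == \H\u\g\e\P\a\g\e\s... ]]' / 'continue' lines above are exercising: setup/common.sh's get_meminfo scans one meminfo key/value pair per iteration until the requested field matches, reading the per-node file when a NUMA node is given. A minimal self-contained sketch of that pattern follows; it is an assumed simplification for illustration, not the exact SPDK helper, though the name get_meminfo and the optional node argument mirror the trace:

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup pattern traced above (assumed
  # simplification of setup/common.sh:get_meminfo, not the exact code).
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      local mem line var val _
      # With a node argument, read the per-node view instead of the global
      # one, as the trace does for /sys/devices/system/node/node0/meminfo.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <N> "; strip it, as the
      # traced mem=("${mem[@]#Node +([0-9]) }") expansion does.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # Non-matching keys are skipped: the long runs of 'continue' above.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo HugePages_Total     # prints e.g. 1024
  get_meminfo HugePages_Surp 0    # per-node lookup (needs a node0 on the host)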
00:04:06.873 07:19:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.873 node0=1024 expecting 1024
00:04:06.873 ************************************
00:04:06.873 END TEST even_2G_alloc
00:04:06.873 ************************************
00:04:06.873 07:19:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.873 07:19:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.873 07:19:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:06.873 07:19:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:06.873
00:04:06.873 real 0m0.585s
00:04:06.873 user 0m0.257s
00:04:06.873 sys 0m0.333s
00:04:06.873 07:19:16 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:06.873 07:19:16 -- common/autotest_common.sh@10 -- # set +x
00:04:06.873 07:19:16 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:06.873 07:19:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:06.873 07:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:06.873 07:19:16 -- common/autotest_common.sh@10 -- # set +x
00:04:06.873 ************************************
00:04:06.873 START TEST odd_alloc
00:04:06.873 ************************************
00:04:06.873 07:19:16 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:06.873 07:19:16 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:06.873 07:19:16 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:06.873 07:19:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:06.873 07:19:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.873 07:19:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:06.873 07:19:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:06.873 07:19:16 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:06.873 07:19:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.873 07:19:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:06.873 07:19:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:06.873 07:19:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.873 07:19:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.873 07:19:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:06.873 07:19:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:06.873 07:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:06.873 07:19:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:06.873 07:19:16 -- setup/hugepages.sh@83 -- # : 0
00:04:06.873 07:19:16 -- setup/hugepages.sh@84 -- # : 0
00:04:06.873 07:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:06.873 07:19:16 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:06.873 07:19:16 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
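The odd_alloc run above requests size=2098176 kB and lands on nr_hugepages=1025: with the 2048 kB Hugepagesize reported in the dumps, 2098176 / 2048 = 1024.5, and the helper rounds up to an odd 1025 pages; HUGEMEM=2049 (MB) is chosen precisely to produce that half-page remainder. A quick check of the arithmetic, assuming ceiling division (the exact rounding inside get_test_nr_hugepages is not shown in this trace):

  # Sketch of the sizing arithmetic (assumed ceiling division).
  size_kb=2098176                  # HUGEMEM=2049 MB * 1024
  hugepage_kb=2048                 # Hugepagesize from the meminfo dumps
  nr=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
  echo "$nr"                       # 1025: an odd page count, as the test name implies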
00:04:06.873 07:19:16 -- setup/hugepages.sh@160 -- # setup output
00:04:06.873 07:19:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.873 07:19:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:07.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:07.450 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:07.450 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:07.450 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:07.450 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:07.450 07:19:16 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:07.450 07:19:16 -- setup/hugepages.sh@89 -- # local node
00:04:07.450 07:19:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:07.450 07:19:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:07.450 07:19:16 -- setup/hugepages.sh@92 -- # local surp
00:04:07.450 07:19:16 -- setup/hugepages.sh@93 -- # local resv
00:04:07.450 07:19:16 -- setup/hugepages.sh@94 -- # local anon
00:04:07.450 07:19:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:07.450 07:19:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:07.450 07:19:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:07.450 07:19:16 -- setup/common.sh@18 -- # local node=
00:04:07.450 07:19:16 -- setup/common.sh@19 -- # local var val
00:04:07.450 07:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:07.450 07:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.450 07:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.450 07:19:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.450 07:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.450 07:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.450 07:19:16 -- setup/common.sh@31 -- # IFS=': '
00:04:07.450 07:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932264 kB' 'MemAvailable: 9488248 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 467128 kB' 'Inactive: 1422012 kB' 'Active(anon): 127920 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119004 kB' 'Mapped: 50900 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161756 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98376 kB' 'KernelStack: 6640 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
00:04:07.450 07:19:16 -- setup/common.sh@31 -- # read -r var val _
00:04:07.450 [trace condensed: setup/common.sh@31-32 tests each key against \A\n\o\n\H\u\g\e\P\a\g\e\s, issuing 'continue' for every non-matching key from MemTotal through HardwareCorrupted]
00:04:07.451 07:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.451 07:19:16 -- setup/common.sh@33 -- # echo 0
00:04:07.451 07:19:16 -- setup/common.sh@33 -- # return 0
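Before trusting the counters, verify_nr_hugepages first checks that transparent hugepages are not forced off (the 'always [madvise] never' string must not select '[never]') and only then samples AnonHugePages, since THP-backed anonymous pages would otherwise skew the expected totals. A sketch of that guard, assuming the get_meminfo helper outlined earlier:

  # Sketch of the anon-THP guard traced at setup/hugepages.sh@96-97
  # (assumed simplification, not the exact SPDK code).
  anon=0
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP may hand out anonymous huge pages, so count them in.
      anon=$(get_meminfo AnonHugePages)
  fi
  echo "anon=$anon"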
00:04:07.451 07:19:16 -- setup/hugepages.sh@97 -- # anon=0
00:04:07.451 07:19:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:07.451 07:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.451 07:19:16 -- setup/common.sh@18 -- # local node=
00:04:07.451 07:19:16 -- setup/common.sh@19 -- # local var val
00:04:07.451 07:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:07.451 07:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.451 07:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.451 07:19:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.451 07:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.451 07:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.451 07:19:16 -- setup/common.sh@31 -- # IFS=': '
00:04:07.451 07:19:16 -- setup/common.sh@31 -- # read -r var val _
00:04:07.452 07:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932264 kB' 'MemAvailable: 9488248 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466912 kB' 'Inactive: 1422012 kB' 'Active(anon): 127704 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118832 kB' 'Mapped: 50820 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161740 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98360 kB' 'KernelStack: 6608 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
00:04:07.452 [trace condensed: setup/common.sh@31-32 tests each key against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, issuing 'continue' for every non-matching key from MemTotal through HugePages_Rsvd]
00:04:07.453 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.453 07:19:16 -- setup/common.sh@33 -- # echo 0
00:04:07.453 07:19:16 -- setup/common.sh@33 -- # return 0
00:04:07.453 07:19:16 -- setup/hugepages.sh@99 -- # surp=0
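With anon and surp collected and resv sampled next, the verification reduces to the accounting identity traced earlier at setup/hugepages.sh@110: HugePages_Total must equal the requested page count plus surplus plus reserved pages. A sketch of that check, again assuming the get_meminfo helper from above:

  # Sketch of the accounting check (mirrors the traced
  # "(( 1024 == nr_hugepages + surp + resv ))" style assertion).
  nr_hugepages=1025
  surp=$(get_meminfo HugePages_Surp)    # 0 in the dumps above
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in the dumps above
  total=$(get_meminfo HugePages_Total)  # 1025 in the dumps above
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting OK: $total == $nr_hugepages + $surp + $resv"
  else
      echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
  fi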
00:04:07.453 07:19:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:07.453 07:19:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:07.453 07:19:16 -- setup/common.sh@18 -- # local node=
00:04:07.453 07:19:16 -- setup/common.sh@19 -- # local var val
00:04:07.453 07:19:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:07.453 07:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.453 07:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.453 07:19:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.453 07:19:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.453 07:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.453 07:19:16 -- setup/common.sh@31 -- # IFS=': '
00:04:07.453 07:19:16 -- setup/common.sh@31 -- # read -r var val _
00:04:07.453 07:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932264 kB' 'MemAvailable: 9488248 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466872 kB' 'Inactive: 1422012 kB' 'Active(anon): 127664 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118796 kB' 'Mapped: 50820 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161736 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98356 kB' 'KernelStack: 6592 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
00:04:07.453 [trace condensed: setup/common.sh@31-32 begins testing each key against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, issuing 'continue' for each non-matching key; the captured trace breaks off mid-scan]
00:04:07.454 07:19:16
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.454 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.454 07:19:16 -- setup/common.sh@33 -- # echo 0 00:04:07.454 07:19:16 -- setup/common.sh@33 -- # return 0 00:04:07.454 07:19:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.454 07:19:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:07.454 nr_hugepages=1025 00:04:07.454 resv_hugepages=0 00:04:07.454 surplus_hugepages=0 00:04:07.454 anon_hugepages=0 00:04:07.454 07:19:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.454 07:19:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.454 07:19:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.454 07:19:16 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:07.454 07:19:16 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:07.454 07:19:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.454 07:19:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.454 07:19:16 -- setup/common.sh@18 -- # local node= 00:04:07.454 07:19:16 -- setup/common.sh@19 -- # local var val 00:04:07.454 07:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.454 07:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.454 07:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.454 07:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.454 07:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.454 07:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.454 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932264 kB' 'MemAvailable: 9488248 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466928 kB' 'Inactive: 1422012 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 50820 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161732 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98352 kB' 'KernelStack: 6592 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 314444 kB' 
'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 
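Nearly all of this verify pass is one helper, get_meminfo in setup/common.sh, scanning a meminfo dump field by field: split each line on ': ', continue past every field that is not the one requested, then echo its value. A simplified sketch of that loop (a standalone approximation; the real helper also handles the per-node /sys/devices/system/node/nodeN/meminfo files and strips their "Node N" prefixes with the mapfile/extglob expansion visible in the trace):

    #!/usr/bin/env bash
    # Simplified sketch of the scan traced above: skip fields until the
    # requested one is found, then print its value.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the repeated [[ ... ]] / continue pairs
            echo "${val:-0}"                  # kB for sizes, bare counts for HugePages_*
            return 0
        done < /proc/meminfo
    }
    get_meminfo_sketch HugePages_Rsvd   # -> 0 in this run, hence resv=0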
00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.455 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.455 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 
-- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.456 07:19:16 -- setup/common.sh@33 -- # echo 1025 00:04:07.456 07:19:16 -- setup/common.sh@33 -- # return 0 00:04:07.456 07:19:16 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:07.456 07:19:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.456 07:19:16 -- setup/hugepages.sh@27 -- # local node 00:04:07.456 07:19:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.456 07:19:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:07.456 07:19:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.456 07:19:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.456 07:19:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.456 07:19:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.456 07:19:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.456 07:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.456 07:19:16 -- setup/common.sh@18 -- # local node=0 00:04:07.456 07:19:16 -- setup/common.sh@19 -- # local var val 00:04:07.456 07:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.456 07:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.456 07:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.456 07:19:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.456 07:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.456 07:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932264 kB' 'MemUsed: 4304832 kB' 'SwapCached: 0 kB' 'Active: 466800 kB' 'Inactive: 1422012 kB' 'Active(anon): 127592 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1771712 kB' 'Mapped: 50820 kB' 'AnonPages: 118672 kB' 'Shmem: 10492 kB' 'KernelStack: 6576 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63380 kB' 'Slab: 161732 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
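The node-0 dump above comes from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, which is why it reports MemUsed instead of MemAvailable. Two of the reported figures can be sanity-checked directly against the other dumps in this run:

    # MemUsed is simply MemTotal - MemFree:
    echo $(( 12237096 - 7932264 ))   # -> 4304832 kB, as reported for node0
    # Hugetlb is HugePages_Total * Hugepagesize:
    echo $(( 1025 * 2048 ))          # -> 2099200 kB, matching the /proc/meminfo dumps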
00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.456 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.456 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.718 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.718 07:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.718 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.718 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.718 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.718 07:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.718 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # continue 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.719 07:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.719 07:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.719 07:19:16 -- setup/common.sh@33 -- # echo 0 00:04:07.719 07:19:16 -- setup/common.sh@33 -- # return 0 00:04:07.719 07:19:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.719 07:19:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.719 07:19:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.719 node0=1025 expecting 1025 00:04:07.719 07:19:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.719 07:19:16 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:07.719 07:19:16 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:07.719 00:04:07.719 real 0m0.600s 00:04:07.719 user 0m0.262s 00:04:07.719 sys 0m0.343s 00:04:07.719 ************************************ 00:04:07.719 END TEST odd_alloc 00:04:07.719 ************************************ 00:04:07.719 07:19:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.719 07:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:07.719 07:19:16 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:07.719 07:19:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.719 07:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.719 07:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:07.719 
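That closes odd_alloc: an odd page count (1025) was requested, presumably so that it cannot split evenly across NUMA nodes, and the verify pass just confirmed both the global ledger and the per-node sum. Reduced to its arithmetic, with the values from this run:

    # Global check: allocated pages must equal requested + surplus + reserved.
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) && echo 'global count OK'
    # Per-node check: node0 holds all 1025 pages on this single-node VM.
    node0=1025
    [[ $node0 == 1025 ]] && echo 'node0=1025 expecting 1025'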
************************************ 00:04:07.719 START TEST custom_alloc 00:04:07.719 ************************************ 00:04:07.719 07:19:16 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:07.719 07:19:16 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:07.719 07:19:16 -- setup/hugepages.sh@169 -- # local node 00:04:07.719 07:19:16 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:07.719 07:19:16 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:07.719 07:19:16 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:07.719 07:19:16 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:07.719 07:19:16 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:07.719 07:19:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.719 07:19:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.719 07:19:16 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:07.719 07:19:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.719 07:19:16 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.719 07:19:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.719 07:19:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:07.719 07:19:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:07.719 07:19:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.719 07:19:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.719 07:19:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.719 07:19:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.719 07:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.719 07:19:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.719 07:19:16 -- setup/hugepages.sh@83 -- # : 0 00:04:07.719 07:19:16 -- setup/hugepages.sh@84 -- # : 0 00:04:07.720 07:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.720 07:19:16 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:07.720 07:19:16 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:07.720 07:19:16 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:07.720 07:19:16 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:07.720 07:19:16 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:07.720 07:19:16 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:07.720 07:19:16 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.720 07:19:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.720 07:19:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:07.720 07:19:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:07.720 07:19:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.720 07:19:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.720 07:19:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.720 07:19:16 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:07.720 07:19:16 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:07.720 07:19:16 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:07.720 07:19:16 -- setup/hugepages.sh@78 -- # return 0 00:04:07.720 07:19:16 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:07.720 07:19:16 -- setup/hugepages.sh@187 -- # setup output 00:04:07.720 07:19:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.720 07:19:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.981 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.981 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.981 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.981 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.981 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.246 07:19:17 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:08.246 07:19:17 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:08.246 07:19:17 -- setup/hugepages.sh@89 -- # local node 00:04:08.246 07:19:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.246 07:19:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.246 07:19:17 -- setup/hugepages.sh@92 -- # local surp 00:04:08.246 07:19:17 -- setup/hugepages.sh@93 -- # local resv 00:04:08.246 07:19:17 -- setup/hugepages.sh@94 -- # local anon 00:04:08.246 07:19:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.246 07:19:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.246 07:19:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.246 07:19:17 -- setup/common.sh@18 -- # local node= 00:04:08.246 07:19:17 -- setup/common.sh@19 -- # local var val 00:04:08.246 07:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.246 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.246 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.246 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.246 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.246 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.246 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.246 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8992060 kB' 'MemAvailable: 10548044 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 467264 kB' 'Inactive: 1422012 kB' 'Active(anon): 128056 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119164 kB' 'Mapped: 51028 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161536 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98156 kB' 'KernelStack: 6628 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
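custom_alloc asks get_test_nr_hugepages for a 1048576 kB (1 GiB) pool, and with the default 2048 kB hugepage size that works out to the nr_hugepages=512 and HUGENODE='nodes_hp[0]=512' traced above. The sizing step, sketched (units assumed to be kB throughout, consistent with Hugepagesize: 2048 kB in the dumps):

    # Sketch of the get_test_nr_hugepages sizing traced above.
    size=1048576            # requested pool in kB (1 GiB)
    default_hugepages=2048  # Hugepagesize in kB (2 MiB pages)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
    fi
    echo "nr_hugepages=$nr_hugepages"   # -> 512; and 512 * 2048 kB = Hugetlb: 1048576 kB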
00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.247 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.247 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 
07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.248 07:19:17 -- setup/common.sh@33 -- # echo 0 00:04:08.248 07:19:17 -- setup/common.sh@33 -- # return 0 00:04:08.248 07:19:17 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.248 07:19:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.248 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.248 07:19:17 -- setup/common.sh@18 -- # local node= 00:04:08.248 07:19:17 -- setup/common.sh@19 -- # local var val 00:04:08.248 07:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.248 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.248 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.248 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.248 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.248 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8996588 kB' 'MemAvailable: 10552572 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466864 kB' 'Inactive: 1422012 kB' 'Active(anon): 127656 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118992 kB' 'Mapped: 50828 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161496 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98116 kB' 'KernelStack: 6608 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 
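Every get_meminfo call traced here follows the same shape: snapshot the meminfo source with mapfile, normalize it, then re-read it with IFS=': ' read -r var val _, skipping (continue) each field until the requested key matches, at which point its value is emitted via the echo/return pair visible above. A condensed sketch of that flow, assuming only what the trace shows (illustrative, not the verbatim setup/common.sh source):

    get_meminfo_sketch() {            # usage: get_meminfo_sketch <Field> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Prefer the per-node meminfo when a node id was given and exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

So anon=$(get_meminfo_sketch AnonHugePages) would yield 0 on this box, matching the anon=0 assignment recorded in the trace.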
-- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.248 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.248 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # 
continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.249 07:19:17 -- setup/common.sh@33 -- # echo 0 00:04:08.249 07:19:17 -- setup/common.sh@33 -- # return 0 00:04:08.249 07:19:17 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.249 07:19:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.249 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.249 07:19:17 -- setup/common.sh@18 -- # local node= 00:04:08.249 07:19:17 -- setup/common.sh@19 -- # local var val 00:04:08.249 07:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.249 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.249 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.249 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.249 07:19:17 -- 
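Note the probe [[ -e /sys/devices/system/node/node/meminfo ]] just above: node is empty for these whole-system lookups, so the doubled node/node path can never exist and mem_f keeps its /proc/meminfo default. Only in the per-node pass further down, where node=0, does the same expansion resolve to a real sysfs file:

    node=    # whole-system query -> .../node/node/meminfo (nonexistent), stay on /proc/meminfo
    node=0   # per-node query     -> /sys/devices/system/node/node0/meminfo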
setup/common.sh@28 -- # mapfile -t mem 00:04:08.249 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8996656 kB' 'MemAvailable: 10552640 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466544 kB' 'Inactive: 1422012 kB' 'Active(anon): 127336 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118652 kB' 'Mapped: 50808 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161488 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98108 kB' 'KernelStack: 6560 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.249 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.249 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- 
setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.250 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.250 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.250 07:19:17 -- setup/common.sh@33 -- # echo 0 00:04:08.250 07:19:17 -- setup/common.sh@33 -- # return 0 00:04:08.250 07:19:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.251 nr_hugepages=512 00:04:08.251 resv_hugepages=0 00:04:08.251 surplus_hugepages=0 00:04:08.251 anon_hugepages=0 00:04:08.251 07:19:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:08.251 07:19:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.251 07:19:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.251 07:19:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.251 07:19:17 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:08.251 07:19:17 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:08.251 07:19:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.251 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.251 07:19:17 -- setup/common.sh@18 -- # local node= 00:04:08.251 07:19:17 -- setup/common.sh@19 -- # local var val 00:04:08.251 07:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.251 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.251 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.251 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.251 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.251 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8996408 kB' 'MemAvailable: 10552392 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466512 kB' 'Inactive: 1422012 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118624 kB' 'Mapped: 50808 kB' 'Shmem: 10492 kB' 'KReclaimable: 63380 kB' 'Slab: 161484 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98104 kB' 'KernelStack: 6544 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 
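With anon, surp, and resv all read back as 0, the harness asserts that the kernel's pool matches what the test configured: the (( 512 == nr_hugepages + surp + resv )) check a few records back, re-run once HugePages_Total (the scan in progress here) is fetched. The arithmetic being verified, spelled out:

    # reported total must equal expected pages plus surplus plus reserved
    nr_hugepages=512 surp=0 resv=0
    (( 512 == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0 -> true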
07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.251 07:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.251 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.251 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 
00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 
07:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.252 07:19:17 -- setup/common.sh@33 -- # echo 512 00:04:08.252 07:19:17 -- setup/common.sh@33 -- # return 0 00:04:08.252 07:19:17 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:08.252 07:19:17 -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.252 07:19:17 -- setup/hugepages.sh@27 -- # local node 00:04:08.252 07:19:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.252 07:19:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.252 07:19:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:08.252 07:19:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.252 07:19:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.252 07:19:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.252 07:19:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.252 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.252 07:19:17 -- setup/common.sh@18 -- # local node=0 00:04:08.252 07:19:17 -- setup/common.sh@19 -- # local var val 00:04:08.252 07:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.252 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.252 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.252 07:19:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.252 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.252 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8996408 kB' 'MemUsed: 3240688 kB' 'SwapCached: 0 kB' 'Active: 466860 kB' 'Inactive: 1422012 kB' 'Active(anon): 127652 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 1771712 kB' 'Mapped: 50820 kB' 'AnonPages: 118732 kB' 'Shmem: 10492 kB' 'KernelStack: 6592 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63380 kB' 'Slab: 161484 kB' 'SReclaimable: 63380 kB' 'SUnreclaim: 98104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.252 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.252 07:19:17 -- setup/common.sh@32 -- # [[ 
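For the per-node pass the reader switches sources: with node=0 the sysfs probe finds /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0 ' prefix, which is why the snapshot is normalized with the extglob expansion ${mem[@]#Node +([0-9]) } before scanning. For example, with values taken from this trace:

    shopt -s extglob
    mem=('Node 0 MemTotal: 12237096 kB' 'Node 0 MemFree: 8996408 kB')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # MemTotal: 12237096 kB
    # MemFree: 8996408 kB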
[... xtrace elided: the remaining node0 meminfo keys, MemUsed through HugePages_Free, are each compared against HugePages_Surp and hit "continue" ...]
00:04:08.253 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.253 07:19:17 -- setup/common.sh@33 -- # echo 0
00:04:08.253 07:19:17 -- setup/common.sh@33 -- # return 0
00:04:08.253 07:19:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.253 node0=512 expecting 512
00:04:08.253 07:19:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.253 07:19:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.253 07:19:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.253 07:19:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:08.253 07:19:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:08.253 real 0m0.584s
00:04:08.253 user 0m0.221s
00:04:08.253 sys 0m0.366s
00:04:08.253 07:19:17 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:08.253 07:19:17 -- common/autotest_common.sh@10 -- # set +x
00:04:08.253 ************************************
00:04:08.253 END TEST custom_alloc
00:04:08.253 ************************************
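The HugePages_Surp lookup that closes custom_alloc above is the get_meminfo pattern this log traces over and over: the chosen meminfo file is read line by line, split with IFS=': ', and scanned until the requested key matches, at which point its value is echoed. A minimal standalone sketch of that pattern, assuming plain /proc/meminfo input (the function name and error handling are illustrative, not the project's exact setup/common.sh):

    #!/usr/bin/env bash
    # Sketch: echo the value of a single /proc/meminfo key, as traced above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Each non-matching key is one "continue" line in the xtrace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # key not present
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the host traced here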
00:04:08.253 07:19:17 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:08.253 07:19:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:08.253 07:19:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:08.253 07:19:17 -- common/autotest_common.sh@10 -- # set +x
00:04:08.253 ************************************
00:04:08.253 START TEST no_shrink_alloc
00:04:08.253 ************************************
00:04:08.253 07:19:17 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:08.253 07:19:17 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:08.253 07:19:17 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:08.253 07:19:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:08.253 07:19:17 -- setup/hugepages.sh@51 -- # shift
00:04:08.253 07:19:17 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:08.253 07:19:17 -- setup/hugepages.sh@52 -- # local node_ids
00:04:08.253 07:19:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.253 07:19:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:08.253 07:19:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:08.253 07:19:17 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:08.253 07:19:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.253 07:19:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:08.253 07:19:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:08.253 07:19:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.253 07:19:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.254 07:19:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:08.254 07:19:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.254 07:19:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:08.254 07:19:17 -- setup/hugepages.sh@73 -- # return 0
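The get_test_nr_hugepages trace above turns a size budget into a page count: the values are consistent with a 2097152 kB (2 GiB) request divided by the 2048 kB default hugepage size, giving nr_hugepages=1024, which is then assigned to each requested NUMA node (only node 0 here). A sketch of that arithmetic, assuming the size argument is in kB and reading Hugepagesize from /proc/meminfo (the helper name is illustrative):

    #!/usr/bin/env bash
    # Sketch: derive the per-node hugepage plan the way the trace above does.
    get_test_nr_hugepages_sketch() {
        local size_kb=$1; shift
        local node_ids=("$@")                  # e.g. (0)
        local hp_kb nr node
        hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
        nr=$(( size_kb / hp_kb ))              # 2097152 / 2048 = 1024
        local -A nodes_test=()
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr
        done
        declare -p nodes_test                  # show the per-node plan
    }

    get_test_nr_hugepages_sketch 2097152 0    # -> nodes_test=([0]="1024")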
00:04:08.254 07:19:17 -- setup/hugepages.sh@198 -- # setup output
00:04:08.254 07:19:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.254 07:19:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:08.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:08.830 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:08.830 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:08.830 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:08.830 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:08.830 07:19:17 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:08.830 07:19:17 -- setup/hugepages.sh@89 -- # local node
00:04:08.830 07:19:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.830 07:19:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.830 07:19:17 -- setup/hugepages.sh@92 -- # local surp
00:04:08.830 07:19:17 -- setup/hugepages.sh@93 -- # local resv
00:04:08.830 07:19:17 -- setup/hugepages.sh@94 -- # local anon
00:04:08.830 07:19:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.830 07:19:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.830 07:19:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.830 07:19:17 -- setup/common.sh@18 -- # local node=
00:04:08.830 07:19:17 -- setup/common.sh@19 -- # local var val
00:04:08.830 07:19:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.830 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.830 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.830 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.830 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.830 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.830 07:19:17 -- setup/common.sh@31 -- # IFS=': '
00:04:08.830 07:19:17 -- setup/common.sh@31 -- # read -r var val _
00:04:08.830 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7946348 kB' 'MemAvailable: 9502328 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 465784 kB' 'Inactive: 1422012 kB' 'Active(anon): 126576 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117684 kB' 'Mapped: 50176 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161512 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 98140 kB' 'KernelStack: 6512 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: every key from MemTotal through HardwareCorrupted is compared against AnonHugePages; each non-match hits "continue" ...]
00:04:08.831 07:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.831 07:19:17 -- setup/common.sh@33 -- # echo 0
00:04:08.831 07:19:17 -- setup/common.sh@33 -- # return 0
00:04:08.831 07:19:17 -- setup/hugepages.sh@97 -- # anon=0
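The get_meminfo calls in this test read the global /proc/meminfo, because no node argument was passed: node= stays empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] check (note the missing node number in the path) fails and mem_f keeps its default. The earlier custom_alloc scan, with keys like MemUsed and FilePages, came from a per-node file instead; those files prefix every line with "Node <N> ", and the mem=("${mem[@]#Node +([0-9]) }") expansion strips that prefix so both formats parse identically. A small demonstration of that extglob strip (the sample lines are illustrative):

    #!/usr/bin/env bash
    shopt -s extglob                       # enable +([0-9]) in expansions

    # Lines as they appear in /sys/devices/system/node/node0/meminfo:
    mem=('Node 0 MemTotal:  12237096 kB' 'Node 0 MemFree:  7946348 kB')

    # Strip the "Node <digits> " prefix from every element at once:
    mem=("${mem[@]#Node +([0-9]) }")

    printf '%s\n' "${mem[@]}"
    # MemTotal:  12237096 kB
    # MemFree:  7946348 kB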
00:04:08.831 07:19:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:08.831 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.831 07:19:17 -- setup/common.sh@18 -- # local node=
00:04:08.831 07:19:17 -- setup/common.sh@19 -- # local var val
00:04:08.831 07:19:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.831 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.831 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.831 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.831 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.831 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.831 07:19:17 -- setup/common.sh@31 -- # IFS=': '
00:04:08.831 07:19:17 -- setup/common.sh@31 -- # read -r var val _
00:04:08.832 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7946348 kB' 'MemAvailable: 9502328 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 465432 kB' 'Inactive: 1422012 kB' 'Active(anon): 126224 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117356 kB' 'Mapped: 49972 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161444 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 98072 kB' 'KernelStack: 6528 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: every key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp; each non-match hits "continue" ...]
00:04:08.833 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.833 07:19:17 -- setup/common.sh@33 -- # echo 0
00:04:08.833 07:19:17 -- setup/common.sh@33 -- # return 0
00:04:08.833 07:19:17 -- setup/hugepages.sh@99 -- # surp=0
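A note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p runs that dominate this trace: inside [[ ]], an unquoted right-hand side of == is a glob pattern, so the script quotes the key to force a literal comparison, and bash's xtrace renders that quoting as a backslash before every character. A short illustration, run under set -x with illustrative values:

    #!/usr/bin/env bash
    set -x
    key=HugePages_Surp
    var=HugePages_Surp

    [[ $var == "$key" ]] && echo literal
    # xtrace shows: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

    [[ $var == HugePages_* ]] && echo pattern
    # unquoted RHS matches as a glob instead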
00:04:08.833 07:19:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.833 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.833 07:19:17 -- setup/common.sh@18 -- # local node=
00:04:08.833 07:19:17 -- setup/common.sh@19 -- # local var val
00:04:08.833 07:19:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.833 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.833 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.833 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.833 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.833 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.833 07:19:17 -- setup/common.sh@31 -- # IFS=': '
00:04:08.833 07:19:17 -- setup/common.sh@31 -- # read -r var val _
00:04:08.833 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7946348 kB' 'MemAvailable: 9502328 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 465176 kB' 'Inactive: 1422012 kB' 'Active(anon): 125968 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117096 kB' 'Mapped: 49972 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161444 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 98072 kB' 'KernelStack: 6528 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: every key from MemTotal through HugePages_Free is compared against HugePages_Rsvd; each non-match hits "continue" ...]
00:04:08.834 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.834 07:19:17 -- setup/common.sh@33 -- # echo 0
00:04:08.834 07:19:17 -- setup/common.sh@33 -- # return 0
00:04:08.834 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:04:08.834 07:19:17 -- setup/hugepages.sh@100 -- # resv=0
00:04:08.834 07:19:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:08.834 07:19:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:08.834 07:19:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:08.834 07:19:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:08.834 07:19:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.834 07:19:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
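The two arithmetic guards above are the core of verify_nr_hugepages: the configured page count must equal what the kernel reports once surplus and reserved pages are accounted for. A standalone sketch of the same consistency check, reading the counters straight from /proc/meminfo (the function name is illustrative):

    #!/usr/bin/env bash
    # Sketch: re-run the verify_nr_hugepages accounting seen above.
    verify_hugepages_sketch() {
        local expected=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
        resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
        # Same guards as hugepages.sh@107 and @109 in the trace:
        (( total == expected + surp + resv )) || return 1
        (( total == expected )) || return 1
        echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
    }

    verify_hugepages_sketch 1024   # passes on the host traced here (1024/0/0)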
00:04:08.834 07:19:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.835 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.835 07:19:17 -- setup/common.sh@18 -- # local node=
00:04:08.835 07:19:17 -- setup/common.sh@19 -- # local var val
00:04:08.835 07:19:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.835 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.835 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.835 07:19:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.835 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.835 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.835 07:19:17 -- setup/common.sh@31 -- # IFS=': '
00:04:08.835 07:19:17 -- setup/common.sh@31 -- # read -r var val _
00:04:08.835 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7946348 kB' 'MemAvailable: 9502328 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 465404 kB' 'Inactive: 1422012 kB' 'Active(anon): 126196 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117324 kB' 'Mapped: 49972 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161444 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 98072 kB' 'KernelStack: 6512 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: keys MemTotal through HardwareCorrupted are compared against HugePages_Total, each hitting "continue" ...]
00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.836 07:19:17 -- setup/common.sh@33 -- # echo 1024 00:04:08.836 07:19:17 -- setup/common.sh@33 -- # return 0 00:04:08.836 07:19:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.836 07:19:17 -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.836 07:19:17 -- setup/hugepages.sh@27 -- # local node 00:04:08.836 07:19:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.836 07:19:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.836 07:19:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:08.836 07:19:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.836 07:19:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.836 07:19:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.836 07:19:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.836 07:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.836 07:19:17 -- setup/common.sh@18 -- # local node=0 
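
The HugePages_Total lookup that just returned 1024 above is the core pattern this whole trace repeats: get_meminfo snapshots a meminfo file and scans "Key: value" pairs until the requested key matches, echoing the value and returning. A minimal standalone sketch of that lookup, assuming a plain /proc/meminfo source; get_meminfo_sketch is an illustrative name, not the script's own function:

    get_meminfo_sketch() {
        local get=$1 var val _
        # split each "Key:   value kB" line on colon/space; _ soaks up the unit
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # on this runner: get_meminfo_sketch HugePages_Total  ->  1024
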
00:04:08.836 07:19:17 -- setup/common.sh@19 -- # local var val 00:04:08.836 07:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.836 07:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.836 07:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.836 07:19:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.836 07:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.836 07:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7946348 kB' 'MemUsed: 4290748 kB' 'SwapCached: 0 kB' 'Active: 465372 kB' 'Inactive: 1422012 kB' 'Active(anon): 126164 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771712 kB' 'Mapped: 49972 kB' 'AnonPages: 117304 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63372 kB' 'Slab: 161444 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 98072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.836 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.836 07:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:17 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- 
# continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # continue 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.837 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.837 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.837 07:19:18 -- setup/common.sh@33 -- # echo 0 00:04:08.837 07:19:18 -- setup/common.sh@33 -- # return 0 00:04:08.837 07:19:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.837 07:19:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.837 07:19:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.837 07:19:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.837 07:19:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:08.837 node0=1024 expecting 1024 00:04:08.837 07:19:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.837 07:19:18 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:08.837 07:19:18 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:08.837 07:19:18 -- setup/hugepages.sh@202 -- # setup output 00:04:08.837 07:19:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.837 07:19:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.410 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.410 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.410 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.410 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.410 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:09.410 07:19:18 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:09.410 07:19:18 -- setup/hugepages.sh@89 -- # local node 00:04:09.410 07:19:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.410 07:19:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.410 07:19:18 -- setup/hugepages.sh@92 -- # local surp 00:04:09.410 07:19:18 -- setup/hugepages.sh@93 -- # local resv 00:04:09.410 07:19:18 -- setup/hugepages.sh@94 -- # local anon 00:04:09.410 07:19:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.410 07:19:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.410 07:19:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.410 07:19:18 -- setup/common.sh@18 -- # local node= 00:04:09.410 07:19:18 -- setup/common.sh@19 -- # local var val 00:04:09.410 07:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.410 07:19:18 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.410 07:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.410 07:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.410 07:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.410 07:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.410 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7946660 kB' 'MemAvailable: 9502640 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 466144 kB' 'Inactive: 1422012 kB' 'Active(anon): 126936 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118052 kB' 'Mapped: 50200 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161352 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 97980 kB' 'KernelStack: 6708 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
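
The check traced at setup/hugepages.sh@96 just above matches the string "always [madvise] never" against the pattern *\[\n\e\v\e\r\]*, i.e. it asks whether transparent hugepages are fully disabled; only when they are not does the script go on to sample AnonHugePages, which is the lookup now in progress. A hedged sketch of that gate, assuming the string comes from the standard /sys/kernel/mm/transparent_hugepage/enabled sysfs file (the path itself is not shown in the trace) and reusing the illustrative helper from earlier:

    # "always [madvise] never" means the bracketed word is the active THP mode;
    # on this box it is madvise, so THP is not fully off.  Path assumed above.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous huge pages, so count them too
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
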
00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.411 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.411 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.412 07:19:18 -- setup/common.sh@33 -- # echo 0 00:04:09.412 07:19:18 -- setup/common.sh@33 -- # return 0 00:04:09.412 07:19:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.412 07:19:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.412 07:19:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.412 07:19:18 -- setup/common.sh@18 -- # local node= 00:04:09.412 07:19:18 -- setup/common.sh@19 -- # local var val 00:04:09.412 07:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.412 07:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.412 07:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.412 07:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.412 07:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.412 07:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.412 07:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7949808 kB' 'MemAvailable: 9505788 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 465728 kB' 'Inactive: 1422012 kB' 'Active(anon): 126520 kB' 'Inactive(anon): 
0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117632 kB' 'Mapped: 50208 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161388 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 98016 kB' 'KernelStack: 6648 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 
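
Steps @22 through @29 of setup/common.sh, repeated before every printf snapshot in this trace, pick the data source and normalize it: with node set, the per-node file under /sys is used; with node unset, the probe path /sys/devices/system/node/node/meminfo does not exist (visible above), so the system-wide /proc/meminfo survives. Per-node files prefix every line with "Node N ", which the extglob substitution strips so one read loop parses both sources. Reassembled from the trace; variable names are the script's own:

    shopt -s extglob                      # required by the +([0-9]) pattern below
    mem_f=/proc/meminfo                   # default: system-wide view
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo   # node-scoped view
    fi
    mapfile -t mem < "$mem_f"             # snapshot, one array element per line
    mem=("${mem[@]#Node +([0-9]) }")      # drop any "Node N " prefix
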
00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.412 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.412 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 
07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.413 07:19:18 -- setup/common.sh@33 -- # echo 0 00:04:09.413 07:19:18 -- setup/common.sh@33 -- # return 0 00:04:09.413 07:19:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.413 07:19:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.413 07:19:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.413 07:19:18 -- setup/common.sh@18 -- # local node= 00:04:09.413 07:19:18 -- setup/common.sh@19 -- # local var val 00:04:09.413 07:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.413 07:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.413 07:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.413 07:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.413 07:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.413 07:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.413 07:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.413 07:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7949852 kB' 'MemAvailable: 9505832 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 465548 kB' 'Inactive: 1422012 kB' 'Active(anon): 126340 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117684 kB' 'Mapped: 50132 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161340 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 97968 kB' 'KernelStack: 6616 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
00:04:09.413 07:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.413 07:19:18 -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31 IFS=': ' / @31 read -r var val _ / @32 compare / @32 continue cycle repeats for every /proc/meminfo field from MemFree through HugePages_Free; none matches HugePages_Rsvd]
00:04:09.414 07:19:18 -- setup/common.sh@31 -- # read -r var val _
00:04:09.414 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.414 07:19:18 -- setup/common.sh@33 -- # echo 0
00:04:09.414 07:19:18 -- setup/common.sh@33 -- # return 0
00:04:09.414 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:04:09.415 07:19:18 -- setup/hugepages.sh@100 -- # resv=0
00:04:09.415 07:19:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:09.415 07:19:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.415 07:19:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.415 07:19:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.415 07:19:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.415 07:19:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:09.415 07:19:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.415 07:19:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.415 07:19:18 -- setup/common.sh@18 -- # local node=
00:04:09.415 07:19:18 -- setup/common.sh@19 -- # local var val
00:04:09.415 07:19:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.415 07:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.415 07:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.415 07:19:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.415 07:19:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.415 07:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.415 07:19:18 -- setup/common.sh@31 -- # IFS=': '
00:04:09.415 07:19:18 -- setup/common.sh@31 -- # read -r var val _
00:04:09.415 07:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7949096 kB' 'MemAvailable: 9505076 kB' 'Buffers: 2684 kB' 'Cached: 1769028 kB' 'SwapCached: 0 kB' 'Active: 465388 kB' 'Inactive: 1422012 kB' 'Active(anon): 126180 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117216 kB' 'Mapped: 50068 kB' 'Shmem: 10492 kB' 'KReclaimable: 63372 kB' 'Slab: 161336 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 97964 kB' 'KernelStack: 6552 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
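[editor's aside] Everything get_meminfo is doing in the trace above and below -- dump the meminfo file, then scan it field by field until the requested key matches -- fits in a few lines of shell. A minimal standalone sketch, simplified from the traced setup/common.sh (not the verbatim script):

#!/usr/bin/env bash
shopt -s extglob   # the "Node +([0-9]) " strip below is an extended glob

# get_meminfo <Field> [<node>] -- print one field's value from /proc/meminfo,
# or from a node's meminfo file when a node id is given.
get_meminfo() {
    local get=$1 node=$2 line var val _ mem_f mem
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total    # -> 1024 on the VM in this run
get_meminfo HugePages_Surp 0   # -> 0 for node 0

The linear scan is why the xtrace is so noisy: every lookup re-reads and re-compares every field until it hits the one it wants.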
00:04:09.415 07:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.415 07:19:18 -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31 IFS=': ' / @31 read -r var val _ / @32 compare / @32 continue cycle repeats for every /proc/meminfo field from MemFree through Unaccepted; none matches HugePages_Total]
00:04:09.416 07:19:18 -- setup/common.sh@31 -- # read -r var val _
00:04:09.416 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.416 07:19:18 -- setup/common.sh@33 -- # echo 1024
00:04:09.416 07:19:18 -- setup/common.sh@33 -- # return 0
00:04:09.416 07:19:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.416 07:19:18 -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.416 07:19:18 -- setup/hugepages.sh@27 -- # local node
00:04:09.416 07:19:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.416 07:19:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:09.416 07:19:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:09.416 07:19:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.416 07:19:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.416 07:19:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.416 07:19:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.416 07:19:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
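[editor's aside] At this point the trace moves from the global check to the per-node pass: hugepages.sh has confirmed HugePages_Total == nr_hugepages + surp + resv globally, and is now fetching HugePages_Surp for each NUMA node. A condensed sketch of that accounting, reusing the get_meminfo sketch above (the real hugepages.sh also maintains nodes_test/nodes_sys arrays and sorted sets, omitted here):

shopt -s extglob nullglob

# Sketch: confirm the kernel honoured the requested hugepage count,
# globally and then per NUMA node (simplified from setup/hugepages.sh).
verify_hugepages() {
    local expected=$1 node total surp resv
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in the run above
    surp=$(get_meminfo HugePages_Surp)     # 0 in the run above
    total=$(get_meminfo HugePages_Total)   # 1024 in the run above
    (( total == expected + surp + resv )) || return 1
    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}
        total=$(get_meminfo HugePages_Total "$node")
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 for node0 in the trace
        echo "node$node=$(( total + surp )) expecting $expected"
        (( total + surp == expected )) || return 1
    done
}

verify_hugepages 1024   # prints "node0=1024 expecting 1024" on this single-node VM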
00:04:09.416 07:19:18 -- setup/common.sh@18 -- # local node=0
00:04:09.416 07:19:18 -- setup/common.sh@19 -- # local var val
00:04:09.416 07:19:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.416 07:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.416 07:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.416 07:19:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.416 07:19:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.416 07:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.416 07:19:18 -- setup/common.sh@31 -- # IFS=': '
00:04:09.416 07:19:18 -- setup/common.sh@31 -- # read -r var val _
00:04:09.416 07:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7949096 kB' 'MemUsed: 4288000 kB' 'SwapCached: 0 kB' 'Active: 465648 kB' 'Inactive: 1422012 kB' 'Active(anon): 126440 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771712 kB' 'Mapped: 50068 kB' 'AnonPages: 117476 kB' 'Shmem: 10492 kB' 'KernelStack: 6620 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63372 kB' 'Slab: 161336 kB' 'SReclaimable: 63372 kB' 'SUnreclaim: 97964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:09.416 07:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.416 07:19:18 -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31 IFS=': ' / @31 read -r var val _ / @32 compare / @32 continue cycle repeats for every node0 meminfo field from MemFree through HugePages_Free; none matches HugePages_Surp]
00:04:09.417 07:19:18 -- setup/common.sh@31 -- # read -r var val _
00:04:09.417 07:19:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.417 07:19:18 -- setup/common.sh@33 -- # echo 0
00:04:09.417 07:19:18 -- setup/common.sh@33 -- # return 0
00:04:09.417 07:19:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.417 07:19:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.417 07:19:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
node0=1024 expecting 1024
************************************
00:04:09.417 END TEST no_shrink_alloc
00:04:09.417 ************************************
00:04:09.417 07:19:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:09.417 07:19:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:09.417 07:19:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:09.417
00:04:09.417 real 0m1.149s
00:04:09.417 user 0m0.480s
00:04:09.417 sys 0m0.680s
00:04:09.417 07:19:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.417 07:19:18 -- common/autotest_common.sh@10 -- # set +x
00:04:09.417 07:19:18 -- setup/hugepages.sh@217 -- # clear_hp
00:04:09.417 07:19:18 -- setup/hugepages.sh@37 -- # local node hp
00:04:09.417 07:19:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:09.417 07:19:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:09.417 07:19:18 -- setup/hugepages.sh@41 -- # echo 0
00:04:09.417 07:19:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:09.417 07:19:18 -- setup/hugepages.sh@41 -- # echo 0
00:04:09.417 07:19:18 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:09.417 07:19:18 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
************************************
00:04:09.417 END TEST hugepages
00:04:09.417 ************************************
00:04:09.417
00:04:09.417 real 0m5.393s
00:04:09.417 user 0m2.189s
00:04:09.417 sys 0m2.902s
00:04:09.417 07:19:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.417 07:19:18 -- common/autotest_common.sh@10 -- # set +x
00:04:09.678 07:19:18 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:09.678 07:19:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.678 07:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.678
07:19:18 -- common/autotest_common.sh@10 -- # set +x 00:04:09.678 ************************************ 00:04:09.678 START TEST driver 00:04:09.678 ************************************ 00:04:09.678 07:19:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:09.678 * Looking for test storage... 00:04:09.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:09.678 07:19:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:09.678 07:19:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:09.678 07:19:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:09.678 07:19:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:09.678 07:19:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:09.678 07:19:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:09.678 07:19:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:09.678 07:19:18 -- scripts/common.sh@335 -- # IFS=.-: 00:04:09.678 07:19:18 -- scripts/common.sh@335 -- # read -ra ver1 00:04:09.678 07:19:18 -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.678 07:19:18 -- scripts/common.sh@336 -- # read -ra ver2 00:04:09.678 07:19:18 -- scripts/common.sh@337 -- # local 'op=<' 00:04:09.678 07:19:18 -- scripts/common.sh@339 -- # ver1_l=2 00:04:09.678 07:19:18 -- scripts/common.sh@340 -- # ver2_l=1 00:04:09.678 07:19:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:09.678 07:19:18 -- scripts/common.sh@343 -- # case "$op" in 00:04:09.678 07:19:18 -- scripts/common.sh@344 -- # : 1 00:04:09.678 07:19:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:09.678 07:19:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.678 07:19:18 -- scripts/common.sh@364 -- # decimal 1 00:04:09.678 07:19:18 -- scripts/common.sh@352 -- # local d=1 00:04:09.678 07:19:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.678 07:19:18 -- scripts/common.sh@354 -- # echo 1 00:04:09.678 07:19:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:09.678 07:19:18 -- scripts/common.sh@365 -- # decimal 2 00:04:09.678 07:19:18 -- scripts/common.sh@352 -- # local d=2 00:04:09.678 07:19:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.678 07:19:18 -- scripts/common.sh@354 -- # echo 2 00:04:09.678 07:19:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:09.678 07:19:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:09.678 07:19:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:09.678 07:19:18 -- scripts/common.sh@367 -- # return 0 00:04:09.678 07:19:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.678 07:19:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:09.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.678 --rc genhtml_branch_coverage=1 00:04:09.678 --rc genhtml_function_coverage=1 00:04:09.678 --rc genhtml_legend=1 00:04:09.678 --rc geninfo_all_blocks=1 00:04:09.678 --rc geninfo_unexecuted_blocks=1 00:04:09.678 00:04:09.678 ' 00:04:09.678 07:19:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:09.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.678 --rc genhtml_branch_coverage=1 00:04:09.678 --rc genhtml_function_coverage=1 00:04:09.678 --rc genhtml_legend=1 00:04:09.678 --rc geninfo_all_blocks=1 00:04:09.678 --rc geninfo_unexecuted_blocks=1 00:04:09.678 00:04:09.678 ' 00:04:09.678 07:19:18 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:04:09.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.678 --rc genhtml_branch_coverage=1 00:04:09.678 --rc genhtml_function_coverage=1 00:04:09.678 --rc genhtml_legend=1 00:04:09.678 --rc geninfo_all_blocks=1 00:04:09.678 --rc geninfo_unexecuted_blocks=1 00:04:09.678 00:04:09.678 ' 00:04:09.678 07:19:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:09.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.678 --rc genhtml_branch_coverage=1 00:04:09.678 --rc genhtml_function_coverage=1 00:04:09.678 --rc genhtml_legend=1 00:04:09.678 --rc geninfo_all_blocks=1 00:04:09.678 --rc geninfo_unexecuted_blocks=1 00:04:09.678 00:04:09.678 ' 00:04:09.678 07:19:18 -- setup/driver.sh@68 -- # setup reset 00:04:09.678 07:19:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.678 07:19:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.315 07:19:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:16.315 07:19:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.315 07:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.315 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.315 ************************************ 00:04:16.315 START TEST guess_driver 00:04:16.315 ************************************ 00:04:16.315 07:19:24 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:16.315 07:19:24 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:16.315 07:19:24 -- setup/driver.sh@47 -- # local fail=0 00:04:16.315 07:19:24 -- setup/driver.sh@49 -- # pick_driver 00:04:16.315 07:19:24 -- setup/driver.sh@36 -- # vfio 00:04:16.315 07:19:24 -- setup/driver.sh@21 -- # local iommu_grups 00:04:16.315 07:19:24 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:16.315 07:19:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:16.315 07:19:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:16.315 07:19:24 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:16.315 07:19:24 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:16.315 07:19:24 -- setup/driver.sh@32 -- # return 1 00:04:16.315 07:19:24 -- setup/driver.sh@38 -- # uio 00:04:16.315 07:19:24 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:16.315 07:19:24 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:16.315 07:19:24 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:16.315 07:19:24 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:16.315 07:19:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:16.315 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:16.315 07:19:24 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:16.315 Looking for driver=uio_pci_generic 00:04:16.315 07:19:24 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:16.315 07:19:24 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:16.315 07:19:24 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:16.315 07:19:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.315 07:19:24 -- setup/driver.sh@45 -- # setup output config 00:04:16.315 07:19:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.315 07:19:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:16.574 
07:19:25 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:16.574 07:19:25 -- setup/driver.sh@58 -- # continue 00:04:16.574 07:19:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.574 07:19:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.574 07:19:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.574 07:19:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.574 07:19:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.574 07:19:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.574 07:19:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.574 07:19:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.574 07:19:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.574 07:19:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.831 07:19:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.831 07:19:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.831 07:19:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.831 07:19:25 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.831 07:19:25 -- setup/driver.sh@65 -- # setup reset 00:04:16.831 07:19:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.831 07:19:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.437 00:04:23.437 real 0m6.981s 00:04:23.437 user 0m0.627s 00:04:23.437 sys 0m1.244s 00:04:23.437 07:19:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.437 ************************************ 00:04:23.437 07:19:31 -- common/autotest_common.sh@10 -- # set +x 00:04:23.437 END TEST guess_driver 00:04:23.437 ************************************ 00:04:23.437 ************************************ 00:04:23.437 END TEST driver 00:04:23.437 ************************************ 00:04:23.437 00:04:23.437 real 0m13.061s 00:04:23.437 user 0m0.982s 00:04:23.437 sys 0m2.009s 00:04:23.437 07:19:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.437 07:19:31 -- common/autotest_common.sh@10 -- # set +x 00:04:23.437 07:19:31 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:23.437 07:19:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.437 07:19:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.437 07:19:31 -- common/autotest_common.sh@10 -- # set +x 00:04:23.437 ************************************ 00:04:23.437 START TEST devices 00:04:23.437 ************************************ 00:04:23.437 07:19:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:23.437 * Looking for test storage... 
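[editor's aside] Before the devices trace continues: the guess_driver test that just finished above reduces to one decision, visible in its xtrace. vfio is only viable when IOMMU groups are populated (or the unsafe no-IOMMU knob is set); otherwise the script falls back to uio_pci_generic, provided modprobe can resolve its module chain. A minimal sketch of that pick, simplified from the traced setup/driver.sh:

shopt -s nullglob   # make the iommu_groups glob expand to nothing when the dir is empty

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic   # the branch this VM took: 0 IOMMU groups, unsafe mode off
    else
        echo 'No valid driver found'
    fi
}

pick_driver   # -> uio_pci_generic in the run above

This is why the trace shows "(( 0 > 0 ))" and "[[ '' == Y ]]" failing before the uio_pci_generic probe succeeds against /lib/modules/6.8.9-200.fc39.x86_64.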
00:04:23.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:23.437 07:19:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:23.437 07:19:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:23.437 07:19:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:23.437 07:19:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:23.437 07:19:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:23.437 07:19:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:23.437 07:19:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:23.437 07:19:31 -- scripts/common.sh@335 -- # IFS=.-: 00:04:23.437 07:19:31 -- scripts/common.sh@335 -- # read -ra ver1 00:04:23.437 07:19:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.437 07:19:31 -- scripts/common.sh@336 -- # read -ra ver2 00:04:23.437 07:19:31 -- scripts/common.sh@337 -- # local 'op=<' 00:04:23.437 07:19:31 -- scripts/common.sh@339 -- # ver1_l=2 00:04:23.437 07:19:31 -- scripts/common.sh@340 -- # ver2_l=1 00:04:23.437 07:19:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:23.437 07:19:31 -- scripts/common.sh@343 -- # case "$op" in 00:04:23.437 07:19:31 -- scripts/common.sh@344 -- # : 1 00:04:23.437 07:19:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:23.437 07:19:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.437 07:19:31 -- scripts/common.sh@364 -- # decimal 1 00:04:23.437 07:19:31 -- scripts/common.sh@352 -- # local d=1 00:04:23.437 07:19:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.437 07:19:31 -- scripts/common.sh@354 -- # echo 1 00:04:23.437 07:19:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:23.437 07:19:31 -- scripts/common.sh@365 -- # decimal 2 00:04:23.437 07:19:31 -- scripts/common.sh@352 -- # local d=2 00:04:23.437 07:19:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.437 07:19:31 -- scripts/common.sh@354 -- # echo 2 00:04:23.437 07:19:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:23.437 07:19:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:23.437 07:19:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:23.437 07:19:31 -- scripts/common.sh@367 -- # return 0 00:04:23.437 07:19:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.437 07:19:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.437 --rc genhtml_branch_coverage=1 00:04:23.437 --rc genhtml_function_coverage=1 00:04:23.437 --rc genhtml_legend=1 00:04:23.437 --rc geninfo_all_blocks=1 00:04:23.437 --rc geninfo_unexecuted_blocks=1 00:04:23.437 00:04:23.437 ' 00:04:23.437 07:19:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.437 --rc genhtml_branch_coverage=1 00:04:23.437 --rc genhtml_function_coverage=1 00:04:23.437 --rc genhtml_legend=1 00:04:23.437 --rc geninfo_all_blocks=1 00:04:23.437 --rc geninfo_unexecuted_blocks=1 00:04:23.437 00:04:23.437 ' 00:04:23.437 07:19:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.437 --rc genhtml_branch_coverage=1 00:04:23.437 --rc genhtml_function_coverage=1 00:04:23.437 --rc genhtml_legend=1 00:04:23.437 --rc geninfo_all_blocks=1 00:04:23.437 --rc geninfo_unexecuted_blocks=1 00:04:23.437 00:04:23.437 ' 00:04:23.437 07:19:31 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.437 --rc genhtml_branch_coverage=1 00:04:23.437 --rc genhtml_function_coverage=1 00:04:23.437 --rc genhtml_legend=1 00:04:23.437 --rc geninfo_all_blocks=1 00:04:23.437 --rc geninfo_unexecuted_blocks=1 00:04:23.437 00:04:23.437 ' 00:04:23.437 07:19:31 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:23.437 07:19:31 -- setup/devices.sh@192 -- # setup reset 00:04:23.437 07:19:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.437 07:19:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.023 07:19:32 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:24.023 07:19:32 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:24.023 07:19:32 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:24.023 07:19:32 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:24.023 07:19:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.023 07:19:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.023 07:19:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.023 07:19:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.023 07:19:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:24.023 07:19:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:24.023 07:19:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.023 07:19:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:24.023 07:19:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:24.023 07:19:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.023 07:19:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:24.023 07:19:32 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.023 07:19:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:24.023 07:19:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:24.023 07:19:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.023 07:19:32 -- setup/devices.sh@196 -- # blocks=() 00:04:24.023 07:19:32 -- setup/devices.sh@196 -- # declare -a blocks 00:04:24.023 07:19:32 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:24.023 07:19:32 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:24.023 07:19:32 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:24.023 07:19:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.023 07:19:32 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:24.023 07:19:32 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:24.023 07:19:32 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:24.023 07:19:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:24.023 07:19:32 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:24.023 07:19:32 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:24.023 07:19:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:24.023 No valid GPT data, bailing 00:04:24.023 07:19:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.023 07:19:33 -- scripts/common.sh@393 -- # pt= 00:04:24.023 07:19:33 -- scripts/common.sh@394 -- # return 1 00:04:24.023 07:19:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:24.024 07:19:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:24.024 07:19:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:24.024 07:19:33 -- setup/common.sh@80 -- # echo 1073741824 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:24.024 07:19:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:24.024 07:19:33 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:24.024 07:19:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:24.024 07:19:33 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:24.024 07:19:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:24.024 No valid GPT data, bailing 00:04:24.024 07:19:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:24.024 07:19:33 -- scripts/common.sh@393 -- # pt= 00:04:24.024 07:19:33 -- scripts/common.sh@394 -- # return 1 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:24.024 07:19:33 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:24.024 07:19:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:24.024 07:19:33 -- setup/common.sh@80 -- # echo 4294967296 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:24.024 07:19:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.024 07:19:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:24.024 07:19:33 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:24.024 07:19:33 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:24.024 07:19:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:24.024 07:19:33 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:24.024 07:19:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:24.024 No valid GPT data, bailing 00:04:24.024 07:19:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:24.024 07:19:33 -- scripts/common.sh@393 -- # pt= 00:04:24.024 07:19:33 -- scripts/common.sh@394 -- # return 1 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:24.024 07:19:33 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:24.024 07:19:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:24.024 07:19:33 -- setup/common.sh@80 -- # echo 4294967296 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:24.024 07:19:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.024 07:19:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:24.024 07:19:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:24.024 07:19:33 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:24.024 07:19:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:24.024 07:19:33 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:24.024 07:19:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:24.024 No valid GPT data, bailing 00:04:24.024 07:19:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:24.024 07:19:33 -- scripts/common.sh@393 -- # pt= 00:04:24.024 07:19:33 -- scripts/common.sh@394 -- # return 1 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:24.024 07:19:33 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:24.024 07:19:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:24.024 07:19:33 -- setup/common.sh@80 -- # echo 4294967296 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:24.024 07:19:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.024 07:19:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:24.024 07:19:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:24.024 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:24.024 07:19:33 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:24.024 07:19:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:24.024 07:19:33 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:24.283 07:19:33 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:24.283 07:19:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:24.283 No valid GPT data, bailing 00:04:24.283 07:19:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:24.283 
07:19:33 -- scripts/common.sh@393 -- # pt= 00:04:24.283 07:19:33 -- scripts/common.sh@394 -- # return 1 00:04:24.283 07:19:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:24.283 07:19:33 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:24.283 07:19:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:24.283 07:19:33 -- setup/common.sh@80 -- # echo 6343335936 00:04:24.283 07:19:33 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:24.283 07:19:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.283 07:19:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:24.283 07:19:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.283 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:24.283 07:19:33 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:24.283 07:19:33 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:24.283 07:19:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:24.283 07:19:33 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:24.283 07:19:33 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:24.283 07:19:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:24.283 No valid GPT data, bailing 00:04:24.283 07:19:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:24.283 07:19:33 -- scripts/common.sh@393 -- # pt= 00:04:24.283 07:19:33 -- scripts/common.sh@394 -- # return 1 00:04:24.283 07:19:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:24.283 07:19:33 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:24.283 07:19:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:24.283 07:19:33 -- setup/common.sh@80 -- # echo 5368709120 00:04:24.283 07:19:33 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:24.283 07:19:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.283 07:19:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:24.283 07:19:33 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:24.283 07:19:33 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:24.283 07:19:33 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.283 07:19:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.283 07:19:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.283 07:19:33 -- common/autotest_common.sh@10 -- # set +x 00:04:24.283 ************************************ 00:04:24.283 START TEST nvme_mount 00:04:24.283 ************************************ 00:04:24.283 07:19:33 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:24.283 07:19:33 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:24.283 07:19:33 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:24.283 07:19:33 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.283 07:19:33 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.283 07:19:33 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:24.283 07:19:33 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:24.283 07:19:33 -- setup/common.sh@40 -- # local part_no=1 00:04:24.283 07:19:33 -- setup/common.sh@41 -- # local size=1073741824 00:04:24.283 07:19:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.283 07:19:33 -- setup/common.sh@44 -- # parts=() 00:04:24.283 07:19:33 -- 
setup/common.sh@44 -- # local parts 00:04:24.283 07:19:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.283 07:19:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.283 07:19:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.283 07:19:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:24.283 07:19:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.283 07:19:33 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:24.283 07:19:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:24.283 07:19:33 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:25.217 Creating new GPT entries in memory. 00:04:25.217 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.217 other utilities. 00:04:25.217 07:19:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.217 07:19:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.217 07:19:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.217 07:19:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.217 07:19:34 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:26.593 Creating new GPT entries in memory. 00:04:26.593 The operation has completed successfully. 00:04:26.593 07:19:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.593 07:19:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.593 07:19:35 -- setup/common.sh@62 -- # wait 53739 00:04:26.593 07:19:35 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.593 07:19:35 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:26.593 07:19:35 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.593 07:19:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:26.593 07:19:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:26.593 07:19:35 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.593 07:19:35 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.593 07:19:35 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:26.593 07:19:35 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:26.593 07:19:35 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.593 07:19:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.593 07:19:35 -- setup/devices.sh@53 -- # local found=0 00:04:26.593 07:19:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.593 07:19:35 -- setup/devices.sh@56 -- # : 00:04:26.593 07:19:35 -- setup/devices.sh@59 -- # local pci status 00:04:26.593 07:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.593 07:19:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:26.593 07:19:35 -- setup/devices.sh@47 -- # setup output config 00:04:26.593 07:19:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.593 07:19:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.593 07:19:35 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.593 07:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.852 07:19:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.852 07:19:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.852 07:19:36 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.852 07:19:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:26.852 07:19:36 -- setup/devices.sh@63 -- # found=1 00:04:26.852 07:19:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.852 07:19:36 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.852 07:19:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.110 07:19:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.110 07:19:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.110 07:19:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.110 07:19:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.110 07:19:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.110 07:19:36 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:27.110 07:19:36 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.110 07:19:36 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.110 07:19:36 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.110 07:19:36 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:27.110 07:19:36 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.110 07:19:36 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.110 07:19:36 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:27.110 07:19:36 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:27.110 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.110 07:19:36 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:27.110 07:19:36 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:27.677 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:27.677 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:27.677 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:27.677 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:27.677 07:19:36 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:27.677 07:19:36 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:27.677 07:19:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.677 07:19:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:27.677 07:19:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:27.677 07:19:36 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.677 07:19:36 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.677 07:19:36 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:27.677 07:19:36 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:27.677 07:19:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.677 07:19:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.677 07:19:36 -- setup/devices.sh@53 -- # local found=0 00:04:27.677 07:19:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.677 07:19:36 -- setup/devices.sh@56 -- # : 00:04:27.677 07:19:36 -- setup/devices.sh@59 -- # local pci status 00:04:27.677 07:19:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.677 07:19:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:27.677 07:19:36 -- setup/devices.sh@47 -- # setup output config 00:04:27.677 07:19:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.677 07:19:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.677 07:19:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.677 07:19:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.936 07:19:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.936 07:19:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.936 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.936 07:19:37 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:27.936 07:19:37 -- setup/devices.sh@63 -- # found=1 00:04:27.936 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.936 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.936 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.194 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.194 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.194 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.194 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.453 07:19:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.453 07:19:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:28.453 07:19:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.453 07:19:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.453 07:19:37 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.453 07:19:37 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.453 07:19:37 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:28.453 07:19:37 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:28.453 07:19:37 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:28.453 07:19:37 -- setup/devices.sh@50 -- # local mount_point= 00:04:28.453 07:19:37 -- setup/devices.sh@51 -- # local test_file= 00:04:28.453 07:19:37 -- setup/devices.sh@53 -- # local found=0 
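The verify() flow traced above reduces to a short loop: run setup.sh in config mode with PCI_ALLOWED restricted to one controller, then scan its per-device output for an "Active devices:" status naming the expected mount. A minimal bash sketch, assuming the `read -r pci _ _ status` field layout seen in the trace (the exact setup.sh output columns are inferred, not confirmed):

    target_bdf=0000:00:08.0
    want=nvme1n1:nvme1n1p1        # "disk:partition" pair being verified
    found=0
    while read -r pci _ _ status; do
        # only the allowed controller should report the mount as an active device
        if [[ $pci == "$target_bdf" && $status == *"Active devices: "*"$want"* ]]; then
            found=1
        fi
    done < <(PCI_ALLOWED=$target_bdf /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( found == 1 )) && echo "mount detected, PCI device left bound"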
00:04:28.453 07:19:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.453 07:19:37 -- setup/devices.sh@59 -- # local pci status 00:04:28.453 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.453 07:19:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:28.453 07:19:37 -- setup/devices.sh@47 -- # setup output config 00:04:28.453 07:19:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.453 07:19:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.453 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.453 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.711 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.711 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.969 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.969 07:19:37 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:28.969 07:19:37 -- setup/devices.sh@63 -- # found=1 00:04:28.969 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.969 07:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.969 07:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.969 07:19:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.969 07:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.969 07:19:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.969 07:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.228 07:19:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.228 07:19:38 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.228 07:19:38 -- setup/devices.sh@68 -- # return 0 00:04:29.228 07:19:38 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:29.228 07:19:38 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.228 07:19:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:29.228 07:19:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:29.228 07:19:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:29.228 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.228 00:04:29.228 real 0m4.853s 00:04:29.228 ************************************ 00:04:29.228 END TEST nvme_mount 00:04:29.228 ************************************ 00:04:29.228 user 0m0.943s 00:04:29.228 sys 0m1.190s 00:04:29.228 07:19:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.228 07:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:29.228 07:19:38 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:29.228 07:19:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.228 07:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.228 07:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:29.228 ************************************ 00:04:29.228 START TEST dm_mount 00:04:29.228 ************************************ 00:04:29.228 07:19:38 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:29.228 07:19:38 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:29.228 07:19:38 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:29.228 07:19:38 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:29.228 07:19:38 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:29.228 07:19:38 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:29.228 07:19:38 -- setup/common.sh@40 -- # local part_no=2 00:04:29.228 07:19:38 -- setup/common.sh@41 -- # local size=1073741824 00:04:29.228 07:19:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.228 07:19:38 -- setup/common.sh@44 -- # parts=() 00:04:29.228 07:19:38 -- setup/common.sh@44 -- # local parts 00:04:29.228 07:19:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.228 07:19:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.228 07:19:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.228 07:19:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.228 07:19:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.228 07:19:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.228 07:19:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.228 07:19:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.228 07:19:38 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:29.228 07:19:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:29.228 07:19:38 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:30.161 Creating new GPT entries in memory. 00:04:30.161 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.161 other utilities. 00:04:30.161 07:19:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.161 07:19:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.161 07:19:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.161 07:19:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.161 07:19:39 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:31.535 Creating new GPT entries in memory. 00:04:31.535 The operation has completed successfully. 00:04:31.535 07:19:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.535 07:19:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.535 07:19:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.535 07:19:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.535 07:19:40 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:32.469 The operation has completed successfully. 
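Laid out as standalone commands, the partitioning sequence in the trace looks like the sketch below. The block numbers are the ones logged; the 4096-byte logical-block size is inferred from the `(( size /= 4096 ))` step (1 GiB / 4096 = 262144 blocks per partition), and partprobe stands in for the harness's sync_dev_uevents.sh udev wait:

    disk=/dev/nvme1n1
    sgdisk "$disk" --zap-all                            # wipe GPT and protective MBR
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # p1: blocks 2048..264191 (262144 blocks)
    flock "$disk" sgdisk "$disk" --new=2:264192:526335  # p2: the next 262144 blocks
    partprobe "$disk"                                   # re-read the partition table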
00:04:32.469 07:19:41 -- setup/common.sh@57 -- # (( part++ )) 00:04:32.469 07:19:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.469 07:19:41 -- setup/common.sh@62 -- # wait 54367 00:04:32.469 07:19:41 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:32.469 07:19:41 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.469 07:19:41 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.469 07:19:41 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:32.469 07:19:41 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:32.469 07:19:41 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.469 07:19:41 -- setup/devices.sh@161 -- # break 00:04:32.469 07:19:41 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.469 07:19:41 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:32.469 07:19:41 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:32.469 07:19:41 -- setup/devices.sh@166 -- # dm=dm-0 00:04:32.469 07:19:41 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:32.469 07:19:41 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:32.469 07:19:41 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.469 07:19:41 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:32.469 07:19:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.469 07:19:41 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.469 07:19:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:32.469 07:19:41 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.469 07:19:41 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.469 07:19:41 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:32.469 07:19:41 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:32.469 07:19:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.469 07:19:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.469 07:19:41 -- setup/devices.sh@53 -- # local found=0 00:04:32.470 07:19:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.470 07:19:41 -- setup/devices.sh@56 -- # : 00:04:32.470 07:19:41 -- setup/devices.sh@59 -- # local pci status 00:04:32.470 07:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.470 07:19:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:32.470 07:19:41 -- setup/devices.sh@47 -- # setup output config 00:04:32.470 07:19:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.470 07:19:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.470 07:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.470 07:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.729 07:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.729 07:19:41 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.729 07:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.729 07:19:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:32.729 07:19:41 -- setup/devices.sh@63 -- # found=1 00:04:32.729 07:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.729 07:19:41 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.729 07:19:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.987 07:19:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.987 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.987 07:19:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.987 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.987 07:19:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.987 07:19:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:32.987 07:19:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.987 07:19:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.987 07:19:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.987 07:19:42 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.987 07:19:42 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:32.987 07:19:42 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:32.987 07:19:42 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:32.987 07:19:42 -- setup/devices.sh@50 -- # local mount_point= 00:04:32.988 07:19:42 -- setup/devices.sh@51 -- # local test_file= 00:04:32.988 07:19:42 -- setup/devices.sh@53 -- # local found=0 00:04:32.988 07:19:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.988 07:19:42 -- setup/devices.sh@59 -- # local pci status 00:04:32.988 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.988 07:19:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:32.988 07:19:42 -- setup/devices.sh@47 -- # setup output config 00:04:32.988 07:19:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.988 07:19:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.245 07:19:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.245 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.245 07:19:42 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.245 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.503 07:19:42 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.503 07:19:42 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:33.503 07:19:42 -- setup/devices.sh@63 -- # found=1 00:04:33.503 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.503 07:19:42 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.503 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.503 07:19:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.503 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.761 07:19:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.762 07:19:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.762 07:19:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.762 07:19:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.762 07:19:42 -- setup/devices.sh@68 -- # return 0 00:04:33.762 07:19:42 -- setup/devices.sh@187 -- # cleanup_dm 00:04:33.762 07:19:42 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.762 07:19:42 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.762 07:19:42 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:33.762 07:19:42 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:33.762 07:19:42 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:33.762 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.762 07:19:42 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:33.762 07:19:42 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:33.762 00:04:33.762 real 0m4.538s 00:04:33.762 user 0m0.624s 00:04:33.762 sys 0m0.826s 00:04:33.762 07:19:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.762 07:19:42 -- common/autotest_common.sh@10 -- # set +x 00:04:33.762 ************************************ 00:04:33.762 END TEST dm_mount 00:04:33.762 ************************************ 00:04:33.762 07:19:42 -- setup/devices.sh@1 -- # cleanup 00:04:33.762 07:19:42 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:33.762 07:19:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.762 07:19:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:33.762 07:19:42 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:33.762 07:19:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:33.762 07:19:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:34.020 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.020 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.020 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.020 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:34.020 07:19:43 -- setup/devices.sh@12 -- # cleanup_dm 00:04:34.020 07:19:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.020 07:19:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.020 07:19:43 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:34.020 07:19:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:34.020 07:19:43 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:34.020 07:19:43 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:34.020 00:04:34.020 real 0m11.364s 00:04:34.020 user 0m2.326s 00:04:34.020 sys 0m2.694s 00:04:34.020 07:19:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.020 07:19:43 -- common/autotest_common.sh@10 -- # set +x 00:04:34.020 ************************************ 00:04:34.020 END TEST devices 00:04:34.020 
************************************ 00:04:34.020 00:04:34.020 real 0m41.118s 00:04:34.020 user 0m7.908s 00:04:34.020 sys 0m10.854s 00:04:34.020 07:19:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.020 07:19:43 -- common/autotest_common.sh@10 -- # set +x 00:04:34.020 ************************************ 00:04:34.020 END TEST setup.sh 00:04:34.020 ************************************ 00:04:34.020 07:19:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:34.279 Hugepages 00:04:34.279 node hugesize free / total 00:04:34.279 node0 1048576kB 0 / 0 00:04:34.279 node0 2048kB 2048 / 2048 00:04:34.279 00:04:34.279 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:34.279 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:34.279 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:34.538 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:34.538 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:34.538 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:34.538 07:19:43 -- spdk/autotest.sh@128 -- # uname -s 00:04:34.538 07:19:43 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:34.538 07:19:43 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:34.538 07:19:43 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.414 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.414 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.414 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.414 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.414 07:19:44 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:36.791 07:19:45 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:36.791 07:19:45 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:36.791 07:19:45 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:36.791 07:19:45 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:36.791 07:19:45 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:36.791 07:19:45 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:36.791 07:19:45 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.791 07:19:45 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.791 07:19:45 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:36.791 07:19:45 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:36.791 07:19:45 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:36.791 07:19:45 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.052 Waiting for block devices as requested 00:04:37.052 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.052 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.052 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.311 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.599 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:42.599 07:19:51 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 
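The bdf discovery at the end of the line above is self-contained enough to run on its own; a sketch with rootdir spelled out (gen_nvme.sh emits a bdev JSON config, and .config[].params.traddr holds the PCI address of each attached controller):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # here: 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0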
00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:42.599 07:19:51 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:42.599 07:19:51 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:42.599 07:19:51 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1552 -- # continue 00:04:42.599 07:19:51 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:42.599 07:19:51 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:42.599 07:19:51 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:42.599 07:19:51 -- 
common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:42.599 07:19:51 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1552 -- # continue 00:04:42.599 07:19:51 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:42.599 07:19:51 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:42.599 07:19:51 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:42.599 07:19:51 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:42.599 07:19:51 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:42.599 07:19:51 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1552 -- # continue 00:04:42.599 07:19:51 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.599 07:19:51 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 
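Each pass of the loop above probes one controller the same way; condensed into a sketch (the device node is resolved from the BDF beforehand, as the readlink/grep steps in the trace do; bit 3 of OACS is the NVMe Namespace Management capability):

    ctrlr=/dev/nvme1                                            # from the sysfs readlink
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)     # e.g. ' 0x12a'
    if (( oacs & 0x8 )); then                                   # namespace management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "no unallocated capacity, skipping revert"
    fi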
00:04:42.599 07:19:51 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:42.599 07:19:51 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:42.599 07:19:51 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:42.599 07:19:51 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:42.599 07:19:51 -- common/autotest_common.sh@1552 -- # continue 00:04:42.599 07:19:51 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:42.599 07:19:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.599 07:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:42.599 07:19:51 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:42.599 07:19:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.599 07:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:42.599 07:19:51 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.170 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.170 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.431 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.431 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.431 07:19:52 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:43.431 07:19:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.431 07:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:43.431 07:19:52 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:43.431 07:19:52 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:43.431 07:19:52 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.431 07:19:52 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:43.431 07:19:52 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:43.431 07:19:52 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:43.431 07:19:52 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:43.431 07:19:52 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:43.431 07:19:52 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.431 07:19:52 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:43.431 07:19:52 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:43.431 07:19:52 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:43.431 07:19:52 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:43.431 07:19:52 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:43.431 07:19:52 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.431 07:19:52 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:43.431 07:19:52 -- common/autotest_common.sh@1576 -- # [[ 
0x0010 == \0\x\0\a\5\4 ]] 00:04:43.431 07:19:52 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:43.431 07:19:52 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.431 07:19:52 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:43.431 07:19:52 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:43.431 07:19:52 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.431 07:19:52 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:43.431 07:19:52 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:43.431 07:19:52 -- common/autotest_common.sh@1588 -- # return 0 00:04:43.431 07:19:52 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:43.431 07:19:52 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:43.431 07:19:52 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:43.431 07:19:52 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:43.431 07:19:52 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:43.431 07:19:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.431 07:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:43.431 07:19:52 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.431 07:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.431 07:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.432 07:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:43.432 ************************************ 00:04:43.432 START TEST env 00:04:43.432 ************************************ 00:04:43.432 07:19:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.694 * Looking for test storage... 00:04:43.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:43.694 07:19:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.694 07:19:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.694 07:19:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.694 07:19:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.694 07:19:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.694 07:19:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.694 07:19:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.694 07:19:52 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.694 07:19:52 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.694 07:19:52 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.694 07:19:52 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.694 07:19:52 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.694 07:19:52 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.694 07:19:52 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.694 07:19:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.694 07:19:52 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.694 07:19:52 -- scripts/common.sh@344 -- # : 1 00:04:43.694 07:19:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.694 07:19:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.694 07:19:52 -- scripts/common.sh@364 -- # decimal 1 00:04:43.694 07:19:52 -- scripts/common.sh@352 -- # local d=1 00:04:43.694 07:19:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.694 07:19:52 -- scripts/common.sh@354 -- # echo 1 00:04:43.694 07:19:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.694 07:19:52 -- scripts/common.sh@365 -- # decimal 2 00:04:43.694 07:19:52 -- scripts/common.sh@352 -- # local d=2 00:04:43.694 07:19:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.694 07:19:52 -- scripts/common.sh@354 -- # echo 2 00:04:43.694 07:19:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.694 07:19:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.694 07:19:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.694 07:19:52 -- scripts/common.sh@367 -- # return 0 00:04:43.694 07:19:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.694 07:19:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.694 --rc genhtml_branch_coverage=1 00:04:43.694 --rc genhtml_function_coverage=1 00:04:43.694 --rc genhtml_legend=1 00:04:43.694 --rc geninfo_all_blocks=1 00:04:43.694 --rc geninfo_unexecuted_blocks=1 00:04:43.694 00:04:43.694 ' 00:04:43.694 07:19:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.694 --rc genhtml_branch_coverage=1 00:04:43.694 --rc genhtml_function_coverage=1 00:04:43.694 --rc genhtml_legend=1 00:04:43.694 --rc geninfo_all_blocks=1 00:04:43.694 --rc geninfo_unexecuted_blocks=1 00:04:43.694 00:04:43.694 ' 00:04:43.694 07:19:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.694 --rc genhtml_branch_coverage=1 00:04:43.694 --rc genhtml_function_coverage=1 00:04:43.694 --rc genhtml_legend=1 00:04:43.694 --rc geninfo_all_blocks=1 00:04:43.694 --rc geninfo_unexecuted_blocks=1 00:04:43.694 00:04:43.694 ' 00:04:43.694 07:19:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.694 --rc genhtml_branch_coverage=1 00:04:43.694 --rc genhtml_function_coverage=1 00:04:43.694 --rc genhtml_legend=1 00:04:43.694 --rc geninfo_all_blocks=1 00:04:43.694 --rc geninfo_unexecuted_blocks=1 00:04:43.694 00:04:43.694 ' 00:04:43.694 07:19:52 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.694 07:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.694 07:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.694 07:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:43.694 ************************************ 00:04:43.694 START TEST env_memory 00:04:43.694 ************************************ 00:04:43.694 07:19:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.694 00:04:43.694 00:04:43.694 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.694 http://cunit.sourceforge.net/ 00:04:43.694 00:04:43.694 00:04:43.694 Suite: memory 00:04:43.694 Test: alloc and free memory map ...[2024-11-19 07:19:52.840596] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.694 passed 00:04:43.694 Test: mem 
map translation ...[2024-11-19 07:19:52.879195] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.694 [2024-11-19 07:19:52.879234] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.694 [2024-11-19 07:19:52.879291] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.694 [2024-11-19 07:19:52.879306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.694 passed 00:04:43.694 Test: mem map registration ...[2024-11-19 07:19:52.947460] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.694 [2024-11-19 07:19:52.947500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.955 passed 00:04:43.955 Test: mem map adjacent registrations ...passed 00:04:43.955 00:04:43.955 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.955 suites 1 1 n/a 0 0 00:04:43.955 tests 4 4 4 0 0 00:04:43.955 asserts 152 152 152 0 n/a 00:04:43.955 00:04:43.955 Elapsed time = 0.233 seconds 00:04:43.955 00:04:43.955 real 0m0.261s 00:04:43.955 user 0m0.236s 00:04:43.955 sys 0m0.020s 00:04:43.955 07:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.955 07:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:43.955 ************************************ 00:04:43.955 END TEST env_memory 00:04:43.955 ************************************ 00:04:43.955 07:19:53 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:43.956 07:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.956 07:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.956 07:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:43.956 ************************************ 00:04:43.956 START TEST env_vtophys 00:04:43.956 ************************************ 00:04:43.956 07:19:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:43.956 EAL: lib.eal log level changed from notice to debug 00:04:43.956 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 1 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 2 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 3 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 4 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 5 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 6 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 7 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 8 as core 0 on socket 0 00:04:43.956 EAL: Detected lcore 9 as core 0 on socket 0 00:04:43.956 EAL: Maximum logical cores by configuration: 128 00:04:43.956 EAL: Detected CPU lcores: 10 00:04:43.956 EAL: Detected NUMA nodes: 1 00:04:43.956 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:43.956 EAL: Detected shared linkage of DPDK 00:04:43.956 EAL: No shared files mode enabled, IPC will be disabled 00:04:43.956 EAL: Selected IOVA mode 'PA' 00:04:43.956 EAL: Probing VFIO support... 
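Note: the EAL probe above checks for VFIO kernel modules before picking a userspace PCI driver; the next lines show this run falling back because they are absent. A minimal sketch of the same check on a typical Linux host (standard /sys/module paths and root access assumed, not taken from this log):

    # Check for the modules EAL probes above; load vfio-pci if missing.
    # Requires root and IOMMU support; otherwise SPDK falls back to uio,
    # exactly as this log does.
    if [ -d /sys/module/vfio ] && [ -d /sys/module/vfio_pci ]; then
        echo "VFIO available"
    else
        sudo modprobe vfio-pci || echo "no VFIO; uio fallback"
    fi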
00:04:43.956 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:43.956 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:43.956 EAL: Ask a virtual area of 0x2e000 bytes 00:04:43.956 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:43.956 EAL: Setting up physically contiguous memory... 00:04:43.956 EAL: Setting maximum number of open files to 524288 00:04:43.956 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:43.956 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:43.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.956 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:43.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.956 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:43.956 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:43.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.956 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:43.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.956 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:43.956 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:43.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.956 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:43.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.956 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:43.956 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:43.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.956 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:43.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.956 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:43.956 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:43.956 EAL: Hugepages will be freed exactly as allocated. 00:04:43.956 EAL: No shared files mode enabled, IPC is disabled 00:04:43.956 EAL: No shared files mode enabled, IPC is disabled 00:04:44.218 EAL: TSC frequency is ~2600000 KHz 00:04:44.218 EAL: Main lcore 0 is ready (tid=7ff55281ba40;cpuset=[0]) 00:04:44.218 EAL: Trying to obtain current memory policy. 00:04:44.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.218 EAL: Restoring previous memory policy: 0 00:04:44.218 EAL: request: mp_malloc_sync 00:04:44.218 EAL: No shared files mode enabled, IPC is disabled 00:04:44.218 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.218 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:44.218 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:44.218 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.218 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:44.218 00:04:44.218 00:04:44.218 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.218 http://cunit.sourceforge.net/ 00:04:44.218 00:04:44.218 00:04:44.218 Suite: components_suite 00:04:44.478 Test: vtophys_malloc_test ...passed 00:04:44.478 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:44.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.478 EAL: Restoring previous memory policy: 4 00:04:44.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.478 EAL: request: mp_malloc_sync 00:04:44.478 EAL: No shared files mode enabled, IPC is disabled 00:04:44.478 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.479 EAL: Trying to obtain current memory policy. 00:04:44.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.479 EAL: Restoring previous memory policy: 4 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.479 EAL: Trying to obtain current memory policy. 00:04:44.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.479 EAL: Restoring previous memory policy: 4 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.479 EAL: Trying to obtain current memory policy. 00:04:44.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.479 EAL: Restoring previous memory policy: 4 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.479 EAL: Trying to obtain current memory policy. 00:04:44.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.479 EAL: Restoring previous memory policy: 4 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.479 EAL: Trying to obtain current memory policy. 
00:04:44.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.479 EAL: Restoring previous memory policy: 4 00:04:44.479 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.479 EAL: request: mp_malloc_sync 00:04:44.479 EAL: No shared files mode enabled, IPC is disabled 00:04:44.479 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.740 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.740 EAL: request: mp_malloc_sync 00:04:44.740 EAL: No shared files mode enabled, IPC is disabled 00:04:44.740 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.740 EAL: Trying to obtain current memory policy. 00:04:44.740 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.740 EAL: Restoring previous memory policy: 4 00:04:44.740 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.740 EAL: request: mp_malloc_sync 00:04:44.740 EAL: No shared files mode enabled, IPC is disabled 00:04:44.740 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.998 EAL: request: mp_malloc_sync 00:04:44.998 EAL: No shared files mode enabled, IPC is disabled 00:04:44.998 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.998 EAL: Trying to obtain current memory policy. 00:04:44.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.998 EAL: Restoring previous memory policy: 4 00:04:44.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.998 EAL: request: mp_malloc_sync 00:04:44.998 EAL: No shared files mode enabled, IPC is disabled 00:04:44.998 EAL: Heap on socket 0 was expanded by 258MB 00:04:45.256 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.514 EAL: request: mp_malloc_sync 00:04:45.514 EAL: No shared files mode enabled, IPC is disabled 00:04:45.514 EAL: Heap on socket 0 was shrunk by 258MB 00:04:45.514 EAL: Trying to obtain current memory policy. 00:04:45.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.775 EAL: Restoring previous memory policy: 4 00:04:45.775 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.775 EAL: request: mp_malloc_sync 00:04:45.775 EAL: No shared files mode enabled, IPC is disabled 00:04:45.775 EAL: Heap on socket 0 was expanded by 514MB 00:04:46.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.346 EAL: request: mp_malloc_sync 00:04:46.346 EAL: No shared files mode enabled, IPC is disabled 00:04:46.346 EAL: Heap on socket 0 was shrunk by 514MB 00:04:46.919 EAL: Trying to obtain current memory policy. 
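Note: each expand/shrink cycle in this suite maps and unmaps 2MB hugepages on demand ("Hugepages will be freed exactly as allocated" above), so the test needs a hugepage pool reserved up front. A hedged sketch of preparing that pool with SPDK's setup script (HUGEMEM is its size knob in MB; the repo path is the one from this log, the 2048 value is only an example):

    # Inspect the current hugepage reservation, then (re)reserve 2 GB
    # for the env tests; setup.sh honors HUGEMEM and needs root.
    grep HugePages_ /proc/meminfo
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh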
00:04:46.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.919 EAL: Restoring previous memory policy: 4 00:04:46.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.919 EAL: request: mp_malloc_sync 00:04:46.919 EAL: No shared files mode enabled, IPC is disabled 00:04:46.919 EAL: Heap on socket 0 was expanded by 1026MB 00:04:48.308 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.308 EAL: request: mp_malloc_sync 00:04:48.308 EAL: No shared files mode enabled, IPC is disabled 00:04:48.308 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.296 passed 00:04:49.296 00:04:49.296 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.296 suites 1 1 n/a 0 0 00:04:49.296 tests 2 2 2 0 0 00:04:49.296 asserts 5327 5327 5327 0 n/a 00:04:49.296 00:04:49.296 Elapsed time = 4.880 seconds 00:04:49.296 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.296 EAL: request: mp_malloc_sync 00:04:49.296 EAL: No shared files mode enabled, IPC is disabled 00:04:49.296 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.296 EAL: No shared files mode enabled, IPC is disabled 00:04:49.296 EAL: No shared files mode enabled, IPC is disabled 00:04:49.296 EAL: No shared files mode enabled, IPC is disabled 00:04:49.296 00:04:49.296 real 0m5.133s 00:04:49.296 user 0m4.332s 00:04:49.296 sys 0m0.656s 00:04:49.296 07:19:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.296 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.296 ************************************ 00:04:49.296 END TEST env_vtophys 00:04:49.296 ************************************ 00:04:49.296 07:19:58 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:49.296 07:19:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.296 07:19:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.296 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.296 ************************************ 00:04:49.296 START TEST env_pci 00:04:49.296 ************************************ 00:04:49.296 07:19:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:49.296 00:04:49.296 00:04:49.296 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.296 http://cunit.sourceforge.net/ 00:04:49.296 00:04:49.296 00:04:49.296 Suite: pci 00:04:49.296 Test: pci_hook ...[2024-11-19 07:19:58.281582] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56061 has claimed it 00:04:49.296 passed 00:04:49.296 00:04:49.296 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.296 suites 1 1 n/a 0 0 00:04:49.296 tests 1 1 1 0 0 00:04:49.296 asserts 25 25 25 0 n/a 00:04:49.296 00:04:49.296 Elapsed time = 0.005 seconds 00:04:49.296 EAL: Cannot find device (10000:00:01.0) 00:04:49.296 EAL: Failed to attach device on primary process 00:04:49.296 00:04:49.296 real 0m0.057s 00:04:49.296 user 0m0.021s 00:04:49.296 sys 0m0.035s 00:04:49.296 07:19:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.296 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.296 ************************************ 00:04:49.296 END TEST env_pci 00:04:49.296 ************************************ 00:04:49.296 07:19:58 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:49.296 07:19:58 -- env/env.sh@15 -- # uname 00:04:49.296 07:19:58 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:49.296 07:19:58 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:49.296 07:19:58 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.296 07:19:58 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:49.296 07:19:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.296 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.296 ************************************ 00:04:49.296 START TEST env_dpdk_post_init 00:04:49.296 ************************************ 00:04:49.296 07:19:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.296 EAL: Detected CPU lcores: 10 00:04:49.296 EAL: Detected NUMA nodes: 1 00:04:49.296 EAL: Detected shared linkage of DPDK 00:04:49.296 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.296 EAL: Selected IOVA mode 'PA' 00:04:49.296 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.296 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:49.296 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:49.296 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:49.296 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:49.558 Starting DPDK initialization... 00:04:49.558 Starting SPDK post initialization... 00:04:49.558 SPDK NVMe probe 00:04:49.558 Attaching to 0000:00:06.0 00:04:49.558 Attaching to 0000:00:07.0 00:04:49.558 Attaching to 0000:00:08.0 00:04:49.558 Attaching to 0000:00:09.0 00:04:49.558 Attached to 0000:00:06.0 00:04:49.558 Attached to 0000:00:07.0 00:04:49.558 Attached to 0000:00:09.0 00:04:49.558 Attached to 0000:00:08.0 00:04:49.558 Cleaning up... 00:04:49.558 00:04:49.558 real 0m0.217s 00:04:49.558 user 0m0.063s 00:04:49.558 sys 0m0.056s 00:04:49.558 07:19:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.558 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.558 ************************************ 00:04:49.558 END TEST env_dpdk_post_init 00:04:49.558 ************************************ 00:04:49.558 07:19:58 -- env/env.sh@26 -- # uname 00:04:49.558 07:19:58 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.558 07:19:58 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.558 07:19:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.558 07:19:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.558 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.558 ************************************ 00:04:49.558 START TEST env_mem_callbacks 00:04:49.558 ************************************ 00:04:49.558 07:19:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.558 EAL: Detected CPU lcores: 10 00:04:49.558 EAL: Detected NUMA nodes: 1 00:04:49.558 EAL: Detected shared linkage of DPDK 00:04:49.558 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.558 EAL: Selected IOVA mode 'PA' 00:04:49.558 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.558 00:04:49.558 00:04:49.558 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.558 http://cunit.sourceforge.net/ 00:04:49.558 00:04:49.558 00:04:49.558 Suite: memory 00:04:49.558 Test: test ... 
00:04:49.558 register 0x200000200000 2097152 00:04:49.558 malloc 3145728 00:04:49.558 register 0x200000400000 4194304 00:04:49.558 buf 0x2000004fffc0 len 3145728 PASSED 00:04:49.558 malloc 64 00:04:49.558 buf 0x2000004ffec0 len 64 PASSED 00:04:49.558 malloc 4194304 00:04:49.558 register 0x200000800000 6291456 00:04:49.558 buf 0x2000009fffc0 len 4194304 PASSED 00:04:49.558 free 0x2000004fffc0 3145728 00:04:49.558 free 0x2000004ffec0 64 00:04:49.558 unregister 0x200000400000 4194304 PASSED 00:04:49.558 free 0x2000009fffc0 4194304 00:04:49.558 unregister 0x200000800000 6291456 PASSED 00:04:49.558 malloc 8388608 00:04:49.558 register 0x200000400000 10485760 00:04:49.558 buf 0x2000005fffc0 len 8388608 PASSED 00:04:49.558 free 0x2000005fffc0 8388608 00:04:49.558 unregister 0x200000400000 10485760 PASSED 00:04:49.558 passed 00:04:49.558 00:04:49.558 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.558 suites 1 1 n/a 0 0 00:04:49.558 tests 1 1 1 0 0 00:04:49.558 asserts 15 15 15 0 n/a 00:04:49.558 00:04:49.558 Elapsed time = 0.039 seconds 00:04:49.820 00:04:49.820 real 0m0.204s 00:04:49.820 user 0m0.055s 00:04:49.820 sys 0m0.048s 00:04:49.820 07:19:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.820 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.820 ************************************ 00:04:49.820 END TEST env_mem_callbacks 00:04:49.820 ************************************ 00:04:49.820 00:04:49.820 real 0m6.192s 00:04:49.820 user 0m4.852s 00:04:49.820 sys 0m0.994s 00:04:49.820 07:19:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.820 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.820 ************************************ 00:04:49.820 END TEST env 00:04:49.820 ************************************ 00:04:49.820 07:19:58 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.820 07:19:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.820 07:19:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.820 07:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.820 ************************************ 00:04:49.820 START TEST rpc 00:04:49.820 ************************************ 00:04:49.820 07:19:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.820 * Looking for test storage... 
00:04:49.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.820 07:19:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.820 07:19:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.820 07:19:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.820 07:19:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.820 07:19:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.820 07:19:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.820 07:19:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.820 07:19:59 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.820 07:19:59 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.820 07:19:59 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.820 07:19:59 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.820 07:19:59 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.820 07:19:59 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.820 07:19:59 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.820 07:19:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.820 07:19:59 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.820 07:19:59 -- scripts/common.sh@344 -- # : 1 00:04:49.820 07:19:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.820 07:19:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.820 07:19:59 -- scripts/common.sh@364 -- # decimal 1 00:04:49.820 07:19:59 -- scripts/common.sh@352 -- # local d=1 00:04:49.820 07:19:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.820 07:19:59 -- scripts/common.sh@354 -- # echo 1 00:04:49.820 07:19:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.820 07:19:59 -- scripts/common.sh@365 -- # decimal 2 00:04:49.820 07:19:59 -- scripts/common.sh@352 -- # local d=2 00:04:49.820 07:19:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.820 07:19:59 -- scripts/common.sh@354 -- # echo 2 00:04:49.820 07:19:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.820 07:19:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.820 07:19:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.820 07:19:59 -- scripts/common.sh@367 -- # return 0 00:04:49.820 07:19:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.820 07:19:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 00:04:49.820 ' 00:04:49.820 07:19:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 00:04:49.820 ' 00:04:49.820 07:19:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 00:04:49.820 ' 00:04:49.820 07:19:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.820 --rc genhtml_branch_coverage=1 00:04:49.820 --rc genhtml_function_coverage=1 00:04:49.820 --rc genhtml_legend=1 00:04:49.820 --rc geninfo_all_blocks=1 00:04:49.820 --rc geninfo_unexecuted_blocks=1 00:04:49.820 00:04:49.820 ' 00:04:49.820 07:19:59 -- rpc/rpc.sh@65 -- # spdk_pid=56187 00:04:49.820 07:19:59 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.820 07:19:59 -- rpc/rpc.sh@67 -- # waitforlisten 56187 00:04:49.820 07:19:59 -- common/autotest_common.sh@829 -- # '[' -z 56187 ']' 00:04:49.820 07:19:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.820 07:19:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.821 07:19:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.821 07:19:59 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:49.821 07:19:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.821 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.082 [2024-11-19 07:19:59.099384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:50.082 [2024-11-19 07:19:59.099498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56187 ] 00:04:50.082 [2024-11-19 07:19:59.246168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.343 [2024-11-19 07:19:59.391773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.343 [2024-11-19 07:19:59.391919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:50.343 [2024-11-19 07:19:59.391931] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56187' to capture a snapshot of events at runtime. 00:04:50.343 [2024-11-19 07:19:59.391938] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56187 for offline analysis/debug. 
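Note: the startup banner above comes from spdk_tgt launching with '-e bdev' tracepoints enabled, while the harness waits for its RPC UNIX socket to come up. A simplified sketch of that start/wait/teardown pattern (binary and socket paths taken from the log; the polling loop approximates the waitforlisten helper rather than reproducing it):

    # Start the target with bdev tracepoints, wait for its RPC socket,
    # run tests, then tear it down (the killprocess equivalent).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # target not listening yet
    done
    # ... rpc_cmd-driven tests run here, roughly rpc.py calls on that socket ...
    kill "$spdk_pid" && wait "$spdk_pid"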
00:04:50.343 [2024-11-19 07:19:59.391964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.911 07:19:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.911 07:19:59 -- common/autotest_common.sh@862 -- # return 0 00:04:50.911 07:19:59 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.911 07:19:59 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.911 07:19:59 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.911 07:19:59 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.911 07:19:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.911 07:19:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.911 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.911 ************************************ 00:04:50.911 START TEST rpc_integrity 00:04:50.911 ************************************ 00:04:50.911 07:19:59 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:50.911 07:19:59 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.911 07:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.911 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.911 07:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.911 07:19:59 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.911 07:19:59 -- rpc/rpc.sh@13 -- # jq length 00:04:50.911 07:19:59 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.911 07:19:59 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.911 07:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.911 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.911 07:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.911 07:19:59 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.911 07:19:59 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.911 07:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.911 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.911 07:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.911 07:19:59 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.911 { 00:04:50.911 "name": "Malloc0", 00:04:50.911 "aliases": [ 00:04:50.911 "efc9e6e9-c12f-4c34-b651-740bd8622886" 00:04:50.911 ], 00:04:50.911 "product_name": "Malloc disk", 00:04:50.911 "block_size": 512, 00:04:50.911 "num_blocks": 16384, 00:04:50.911 "uuid": "efc9e6e9-c12f-4c34-b651-740bd8622886", 00:04:50.911 "assigned_rate_limits": { 00:04:50.911 "rw_ios_per_sec": 0, 00:04:50.911 "rw_mbytes_per_sec": 0, 00:04:50.911 "r_mbytes_per_sec": 0, 00:04:50.911 "w_mbytes_per_sec": 0 00:04:50.911 }, 00:04:50.911 "claimed": false, 00:04:50.911 "zoned": false, 00:04:50.911 "supported_io_types": { 00:04:50.911 "read": true, 00:04:50.911 "write": true, 00:04:50.911 "unmap": true, 00:04:50.911 "write_zeroes": true, 00:04:50.911 "flush": true, 00:04:50.911 "reset": true, 00:04:50.911 "compare": false, 00:04:50.911 "compare_and_write": false, 00:04:50.911 "abort": true, 00:04:50.911 "nvme_admin": false, 00:04:50.911 "nvme_io": false 00:04:50.911 }, 00:04:50.911 "memory_domains": [ 00:04:50.911 { 00:04:50.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.911 
"dma_device_type": 2 00:04:50.911 } 00:04:50.911 ], 00:04:50.911 "driver_specific": {} 00:04:50.911 } 00:04:50.911 ]' 00:04:50.911 07:19:59 -- rpc/rpc.sh@17 -- # jq length 00:04:50.911 07:19:59 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.911 07:19:59 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.911 07:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.911 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.911 [2024-11-19 07:19:59.983246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.911 [2024-11-19 07:19:59.983295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.911 [2024-11-19 07:19:59.983312] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:50.912 [2024-11-19 07:19:59.983321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.912 [2024-11-19 07:19:59.984955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.912 [2024-11-19 07:19:59.984988] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.912 Passthru0 00:04:50.912 07:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.912 07:19:59 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.912 07:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.912 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.912 07:20:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.912 { 00:04:50.912 "name": "Malloc0", 00:04:50.912 "aliases": [ 00:04:50.912 "efc9e6e9-c12f-4c34-b651-740bd8622886" 00:04:50.912 ], 00:04:50.912 "product_name": "Malloc disk", 00:04:50.912 "block_size": 512, 00:04:50.912 "num_blocks": 16384, 00:04:50.912 "uuid": "efc9e6e9-c12f-4c34-b651-740bd8622886", 00:04:50.912 "assigned_rate_limits": { 00:04:50.912 "rw_ios_per_sec": 0, 00:04:50.912 "rw_mbytes_per_sec": 0, 00:04:50.912 "r_mbytes_per_sec": 0, 00:04:50.912 "w_mbytes_per_sec": 0 00:04:50.912 }, 00:04:50.912 "claimed": true, 00:04:50.912 "claim_type": "exclusive_write", 00:04:50.912 "zoned": false, 00:04:50.912 "supported_io_types": { 00:04:50.912 "read": true, 00:04:50.912 "write": true, 00:04:50.912 "unmap": true, 00:04:50.912 "write_zeroes": true, 00:04:50.912 "flush": true, 00:04:50.912 "reset": true, 00:04:50.912 "compare": false, 00:04:50.912 "compare_and_write": false, 00:04:50.912 "abort": true, 00:04:50.912 "nvme_admin": false, 00:04:50.912 "nvme_io": false 00:04:50.912 }, 00:04:50.912 "memory_domains": [ 00:04:50.912 { 00:04:50.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.912 "dma_device_type": 2 00:04:50.912 } 00:04:50.912 ], 00:04:50.912 "driver_specific": {} 00:04:50.912 }, 00:04:50.912 { 00:04:50.912 "name": "Passthru0", 00:04:50.912 "aliases": [ 00:04:50.912 "e390cf5e-9c6b-58ec-8008-b463ecf4cff5" 00:04:50.912 ], 00:04:50.912 "product_name": "passthru", 00:04:50.912 "block_size": 512, 00:04:50.912 "num_blocks": 16384, 00:04:50.912 "uuid": "e390cf5e-9c6b-58ec-8008-b463ecf4cff5", 00:04:50.912 "assigned_rate_limits": { 00:04:50.912 "rw_ios_per_sec": 0, 00:04:50.912 "rw_mbytes_per_sec": 0, 00:04:50.912 "r_mbytes_per_sec": 0, 00:04:50.912 "w_mbytes_per_sec": 0 00:04:50.912 }, 00:04:50.912 "claimed": false, 00:04:50.912 "zoned": false, 00:04:50.912 "supported_io_types": { 00:04:50.912 "read": true, 00:04:50.912 "write": true, 00:04:50.912 "unmap": true, 00:04:50.912 
"write_zeroes": true, 00:04:50.912 "flush": true, 00:04:50.912 "reset": true, 00:04:50.912 "compare": false, 00:04:50.912 "compare_and_write": false, 00:04:50.912 "abort": true, 00:04:50.912 "nvme_admin": false, 00:04:50.912 "nvme_io": false 00:04:50.912 }, 00:04:50.912 "memory_domains": [ 00:04:50.912 { 00:04:50.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.912 "dma_device_type": 2 00:04:50.912 } 00:04:50.912 ], 00:04:50.912 "driver_specific": { 00:04:50.912 "passthru": { 00:04:50.912 "name": "Passthru0", 00:04:50.912 "base_bdev_name": "Malloc0" 00:04:50.912 } 00:04:50.912 } 00:04:50.912 } 00:04:50.912 ]' 00:04:50.912 07:20:00 -- rpc/rpc.sh@21 -- # jq length 00:04:50.912 07:20:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.912 07:20:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.912 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.912 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.912 07:20:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.912 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.912 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.912 07:20:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.912 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.912 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.912 07:20:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.912 07:20:00 -- rpc/rpc.sh@26 -- # jq length 00:04:50.912 07:20:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.912 00:04:50.912 real 0m0.229s 00:04:50.912 user 0m0.128s 00:04:50.912 sys 0m0.031s 00:04:50.912 07:20:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.912 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 ************************************ 00:04:50.912 END TEST rpc_integrity 00:04:50.912 ************************************ 00:04:50.912 07:20:00 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.912 07:20:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.912 07:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.912 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 ************************************ 00:04:50.912 START TEST rpc_plugins 00:04:50.912 ************************************ 00:04:50.912 07:20:00 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:50.912 07:20:00 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:50.912 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.912 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.912 07:20:00 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:50.912 07:20:00 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:50.912 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.912 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.171 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.171 07:20:00 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.171 { 00:04:51.171 "name": "Malloc1", 00:04:51.171 "aliases": [ 00:04:51.171 "c692356f-c081-4f63-84df-590459bebe0c" 00:04:51.171 ], 00:04:51.171 "product_name": "Malloc disk", 00:04:51.171 
"block_size": 4096, 00:04:51.171 "num_blocks": 256, 00:04:51.171 "uuid": "c692356f-c081-4f63-84df-590459bebe0c", 00:04:51.171 "assigned_rate_limits": { 00:04:51.171 "rw_ios_per_sec": 0, 00:04:51.171 "rw_mbytes_per_sec": 0, 00:04:51.171 "r_mbytes_per_sec": 0, 00:04:51.171 "w_mbytes_per_sec": 0 00:04:51.171 }, 00:04:51.171 "claimed": false, 00:04:51.171 "zoned": false, 00:04:51.171 "supported_io_types": { 00:04:51.171 "read": true, 00:04:51.171 "write": true, 00:04:51.171 "unmap": true, 00:04:51.171 "write_zeroes": true, 00:04:51.171 "flush": true, 00:04:51.171 "reset": true, 00:04:51.171 "compare": false, 00:04:51.171 "compare_and_write": false, 00:04:51.171 "abort": true, 00:04:51.171 "nvme_admin": false, 00:04:51.171 "nvme_io": false 00:04:51.171 }, 00:04:51.171 "memory_domains": [ 00:04:51.171 { 00:04:51.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.171 "dma_device_type": 2 00:04:51.171 } 00:04:51.171 ], 00:04:51.171 "driver_specific": {} 00:04:51.171 } 00:04:51.171 ]' 00:04:51.171 07:20:00 -- rpc/rpc.sh@32 -- # jq length 00:04:51.171 07:20:00 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.171 07:20:00 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.171 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.171 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.171 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.171 07:20:00 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.171 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.171 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.171 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.171 07:20:00 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.172 07:20:00 -- rpc/rpc.sh@36 -- # jq length 00:04:51.172 07:20:00 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.172 00:04:51.172 real 0m0.111s 00:04:51.172 user 0m0.062s 00:04:51.172 sys 0m0.019s 00:04:51.172 07:20:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.172 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.172 ************************************ 00:04:51.172 END TEST rpc_plugins 00:04:51.172 ************************************ 00:04:51.172 07:20:00 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.172 07:20:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.172 07:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.172 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.172 ************************************ 00:04:51.172 START TEST rpc_trace_cmd_test 00:04:51.172 ************************************ 00:04:51.172 07:20:00 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:51.172 07:20:00 -- rpc/rpc.sh@40 -- # local info 00:04:51.172 07:20:00 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.172 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.172 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.172 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.172 07:20:00 -- rpc/rpc.sh@42 -- # info='{ 00:04:51.172 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56187", 00:04:51.172 "tpoint_group_mask": "0x8", 00:04:51.172 "iscsi_conn": { 00:04:51.172 "mask": "0x2", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "scsi": { 00:04:51.172 "mask": "0x4", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "bdev": { 00:04:51.172 "mask": "0x8", 00:04:51.172 "tpoint_mask": 
"0xffffffffffffffff" 00:04:51.172 }, 00:04:51.172 "nvmf_rdma": { 00:04:51.172 "mask": "0x10", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "nvmf_tcp": { 00:04:51.172 "mask": "0x20", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "ftl": { 00:04:51.172 "mask": "0x40", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "blobfs": { 00:04:51.172 "mask": "0x80", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "dsa": { 00:04:51.172 "mask": "0x200", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "thread": { 00:04:51.172 "mask": "0x400", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "nvme_pcie": { 00:04:51.172 "mask": "0x800", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "iaa": { 00:04:51.172 "mask": "0x1000", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "nvme_tcp": { 00:04:51.172 "mask": "0x2000", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 }, 00:04:51.172 "bdev_nvme": { 00:04:51.172 "mask": "0x4000", 00:04:51.172 "tpoint_mask": "0x0" 00:04:51.172 } 00:04:51.172 }' 00:04:51.172 07:20:00 -- rpc/rpc.sh@43 -- # jq length 00:04:51.172 07:20:00 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:51.172 07:20:00 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.172 07:20:00 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.172 07:20:00 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.172 07:20:00 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.172 07:20:00 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.432 07:20:00 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.432 07:20:00 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:51.432 07:20:00 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:51.432 00:04:51.432 real 0m0.157s 00:04:51.432 user 0m0.127s 00:04:51.432 sys 0m0.024s 00:04:51.432 07:20:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.432 ************************************ 00:04:51.432 END TEST rpc_trace_cmd_test 00:04:51.432 ************************************ 00:04:51.432 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 07:20:00 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.432 07:20:00 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.432 07:20:00 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.432 07:20:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.432 07:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.432 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 ************************************ 00:04:51.432 START TEST rpc_daemon_integrity 00:04:51.432 ************************************ 00:04:51.432 07:20:00 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:51.432 07:20:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.432 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.432 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.432 07:20:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.432 07:20:00 -- rpc/rpc.sh@13 -- # jq length 00:04:51.432 07:20:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.432 07:20:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.432 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.432 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.432 07:20:00 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:51.432 07:20:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.432 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.432 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.432 07:20:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.432 { 00:04:51.432 "name": "Malloc2", 00:04:51.432 "aliases": [ 00:04:51.432 "7ea5a81c-cff6-4bfa-8867-4adbc8168887" 00:04:51.432 ], 00:04:51.432 "product_name": "Malloc disk", 00:04:51.432 "block_size": 512, 00:04:51.432 "num_blocks": 16384, 00:04:51.432 "uuid": "7ea5a81c-cff6-4bfa-8867-4adbc8168887", 00:04:51.432 "assigned_rate_limits": { 00:04:51.432 "rw_ios_per_sec": 0, 00:04:51.432 "rw_mbytes_per_sec": 0, 00:04:51.432 "r_mbytes_per_sec": 0, 00:04:51.432 "w_mbytes_per_sec": 0 00:04:51.432 }, 00:04:51.432 "claimed": false, 00:04:51.432 "zoned": false, 00:04:51.432 "supported_io_types": { 00:04:51.432 "read": true, 00:04:51.432 "write": true, 00:04:51.432 "unmap": true, 00:04:51.432 "write_zeroes": true, 00:04:51.432 "flush": true, 00:04:51.432 "reset": true, 00:04:51.432 "compare": false, 00:04:51.432 "compare_and_write": false, 00:04:51.432 "abort": true, 00:04:51.432 "nvme_admin": false, 00:04:51.432 "nvme_io": false 00:04:51.432 }, 00:04:51.432 "memory_domains": [ 00:04:51.432 { 00:04:51.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.432 "dma_device_type": 2 00:04:51.432 } 00:04:51.432 ], 00:04:51.432 "driver_specific": {} 00:04:51.432 } 00:04:51.432 ]' 00:04:51.432 07:20:00 -- rpc/rpc.sh@17 -- # jq length 00:04:51.432 07:20:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.432 07:20:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:51.432 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.432 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 [2024-11-19 07:20:00.597561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:51.432 [2024-11-19 07:20:00.597608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.432 [2024-11-19 07:20:00.597624] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:51.432 [2024-11-19 07:20:00.597634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.432 [2024-11-19 07:20:00.599326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.432 [2024-11-19 07:20:00.599359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.432 Passthru0 00:04:51.432 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.432 07:20:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.432 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.432 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.432 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.432 07:20:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.432 { 00:04:51.432 "name": "Malloc2", 00:04:51.432 "aliases": [ 00:04:51.432 "7ea5a81c-cff6-4bfa-8867-4adbc8168887" 00:04:51.432 ], 00:04:51.432 "product_name": "Malloc disk", 00:04:51.432 "block_size": 512, 00:04:51.432 "num_blocks": 16384, 00:04:51.432 "uuid": "7ea5a81c-cff6-4bfa-8867-4adbc8168887", 00:04:51.432 "assigned_rate_limits": { 00:04:51.432 "rw_ios_per_sec": 0, 00:04:51.432 "rw_mbytes_per_sec": 0, 00:04:51.432 "r_mbytes_per_sec": 0, 00:04:51.432 
"w_mbytes_per_sec": 0 00:04:51.432 }, 00:04:51.432 "claimed": true, 00:04:51.432 "claim_type": "exclusive_write", 00:04:51.432 "zoned": false, 00:04:51.432 "supported_io_types": { 00:04:51.432 "read": true, 00:04:51.432 "write": true, 00:04:51.432 "unmap": true, 00:04:51.433 "write_zeroes": true, 00:04:51.433 "flush": true, 00:04:51.433 "reset": true, 00:04:51.433 "compare": false, 00:04:51.433 "compare_and_write": false, 00:04:51.433 "abort": true, 00:04:51.433 "nvme_admin": false, 00:04:51.433 "nvme_io": false 00:04:51.433 }, 00:04:51.433 "memory_domains": [ 00:04:51.433 { 00:04:51.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.433 "dma_device_type": 2 00:04:51.433 } 00:04:51.433 ], 00:04:51.433 "driver_specific": {} 00:04:51.433 }, 00:04:51.433 { 00:04:51.433 "name": "Passthru0", 00:04:51.433 "aliases": [ 00:04:51.433 "24b2f5fd-e70a-56aa-a105-45a288caf022" 00:04:51.433 ], 00:04:51.433 "product_name": "passthru", 00:04:51.433 "block_size": 512, 00:04:51.433 "num_blocks": 16384, 00:04:51.433 "uuid": "24b2f5fd-e70a-56aa-a105-45a288caf022", 00:04:51.433 "assigned_rate_limits": { 00:04:51.433 "rw_ios_per_sec": 0, 00:04:51.433 "rw_mbytes_per_sec": 0, 00:04:51.433 "r_mbytes_per_sec": 0, 00:04:51.433 "w_mbytes_per_sec": 0 00:04:51.433 }, 00:04:51.433 "claimed": false, 00:04:51.433 "zoned": false, 00:04:51.433 "supported_io_types": { 00:04:51.433 "read": true, 00:04:51.433 "write": true, 00:04:51.433 "unmap": true, 00:04:51.433 "write_zeroes": true, 00:04:51.433 "flush": true, 00:04:51.433 "reset": true, 00:04:51.433 "compare": false, 00:04:51.433 "compare_and_write": false, 00:04:51.433 "abort": true, 00:04:51.433 "nvme_admin": false, 00:04:51.433 "nvme_io": false 00:04:51.433 }, 00:04:51.433 "memory_domains": [ 00:04:51.433 { 00:04:51.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.433 "dma_device_type": 2 00:04:51.433 } 00:04:51.433 ], 00:04:51.433 "driver_specific": { 00:04:51.433 "passthru": { 00:04:51.433 "name": "Passthru0", 00:04:51.433 "base_bdev_name": "Malloc2" 00:04:51.433 } 00:04:51.433 } 00:04:51.433 } 00:04:51.433 ]' 00:04:51.433 07:20:00 -- rpc/rpc.sh@21 -- # jq length 00:04:51.433 07:20:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.433 07:20:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.433 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.433 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.433 07:20:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.433 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.433 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.433 07:20:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.433 07:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.433 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.433 07:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.433 07:20:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.693 07:20:00 -- rpc/rpc.sh@26 -- # jq length 00:04:51.693 07:20:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.693 00:04:51.693 real 0m0.232s 00:04:51.693 user 0m0.126s 00:04:51.693 sys 0m0.028s 00:04:51.693 07:20:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.693 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.693 ************************************ 00:04:51.693 END TEST 
rpc_daemon_integrity 00:04:51.693 ************************************ 00:04:51.693 07:20:00 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.693 07:20:00 -- rpc/rpc.sh@84 -- # killprocess 56187 00:04:51.693 07:20:00 -- common/autotest_common.sh@936 -- # '[' -z 56187 ']' 00:04:51.693 07:20:00 -- common/autotest_common.sh@940 -- # kill -0 56187 00:04:51.693 07:20:00 -- common/autotest_common.sh@941 -- # uname 00:04:51.693 07:20:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.693 07:20:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56187 00:04:51.693 07:20:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:51.693 07:20:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:51.693 07:20:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56187' 00:04:51.693 killing process with pid 56187 00:04:51.693 07:20:00 -- common/autotest_common.sh@955 -- # kill 56187 00:04:51.693 07:20:00 -- common/autotest_common.sh@960 -- # wait 56187 00:04:53.077 00:04:53.077 real 0m3.054s 00:04:53.077 user 0m3.411s 00:04:53.077 sys 0m0.578s 00:04:53.077 07:20:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.077 07:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.077 ************************************ 00:04:53.077 END TEST rpc 00:04:53.077 ************************************ 00:04:53.077 07:20:01 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.077 07:20:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.077 07:20:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.077 07:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.077 ************************************ 00:04:53.077 START TEST rpc_client 00:04:53.077 ************************************ 00:04:53.077 07:20:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.077 * Looking for test storage... 00:04:53.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:53.077 07:20:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.077 07:20:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.077 07:20:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.077 07:20:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.077 07:20:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.077 07:20:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.077 07:20:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.077 07:20:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.077 07:20:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.078 07:20:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.078 07:20:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.078 07:20:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.078 07:20:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.078 07:20:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.078 07:20:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.078 07:20:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.078 07:20:02 -- scripts/common.sh@344 -- # : 1 00:04:53.078 07:20:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.078 07:20:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.078 07:20:02 -- scripts/common.sh@364 -- # decimal 1 00:04:53.078 07:20:02 -- scripts/common.sh@352 -- # local d=1 00:04:53.078 07:20:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.078 07:20:02 -- scripts/common.sh@354 -- # echo 1 00:04:53.078 07:20:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.078 07:20:02 -- scripts/common.sh@365 -- # decimal 2 00:04:53.078 07:20:02 -- scripts/common.sh@352 -- # local d=2 00:04:53.078 07:20:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.078 07:20:02 -- scripts/common.sh@354 -- # echo 2 00:04:53.078 07:20:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.078 07:20:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.078 07:20:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.078 07:20:02 -- scripts/common.sh@367 -- # return 0 00:04:53.078 07:20:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.078 07:20:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.078 --rc genhtml_branch_coverage=1 00:04:53.078 --rc genhtml_function_coverage=1 00:04:53.078 --rc genhtml_legend=1 00:04:53.078 --rc geninfo_all_blocks=1 00:04:53.078 --rc geninfo_unexecuted_blocks=1 00:04:53.078 00:04:53.078 ' 00:04:53.078 07:20:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.078 --rc genhtml_branch_coverage=1 00:04:53.078 --rc genhtml_function_coverage=1 00:04:53.078 --rc genhtml_legend=1 00:04:53.078 --rc geninfo_all_blocks=1 00:04:53.078 --rc geninfo_unexecuted_blocks=1 00:04:53.078 00:04:53.078 ' 00:04:53.078 07:20:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.078 --rc genhtml_branch_coverage=1 00:04:53.078 --rc genhtml_function_coverage=1 00:04:53.078 --rc genhtml_legend=1 00:04:53.078 --rc geninfo_all_blocks=1 00:04:53.078 --rc geninfo_unexecuted_blocks=1 00:04:53.078 00:04:53.078 ' 00:04:53.078 07:20:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.078 --rc genhtml_branch_coverage=1 00:04:53.078 --rc genhtml_function_coverage=1 00:04:53.078 --rc genhtml_legend=1 00:04:53.078 --rc geninfo_all_blocks=1 00:04:53.078 --rc geninfo_unexecuted_blocks=1 00:04:53.078 00:04:53.078 ' 00:04:53.078 07:20:02 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:53.078 OK 00:04:53.078 07:20:02 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.078 00:04:53.078 real 0m0.179s 00:04:53.078 user 0m0.099s 00:04:53.078 sys 0m0.088s 00:04:53.078 07:20:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.078 ************************************ 00:04:53.078 END TEST rpc_client 00:04:53.078 ************************************ 00:04:53.078 07:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:53.078 07:20:02 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.078 07:20:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.078 07:20:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.078 07:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:53.078 ************************************ 00:04:53.078 START TEST 
json_config 00:04:53.078 ************************************ 00:04:53.078 07:20:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.078 07:20:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.078 07:20:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.078 07:20:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.078 07:20:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.078 07:20:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.078 07:20:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.078 07:20:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.078 07:20:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.078 07:20:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.078 07:20:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.078 07:20:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.078 07:20:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.078 07:20:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.078 07:20:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.078 07:20:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.078 07:20:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.078 07:20:02 -- scripts/common.sh@344 -- # : 1 00:04:53.078 07:20:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.078 07:20:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.340 07:20:02 -- scripts/common.sh@364 -- # decimal 1 00:04:53.340 07:20:02 -- scripts/common.sh@352 -- # local d=1 00:04:53.340 07:20:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.340 07:20:02 -- scripts/common.sh@354 -- # echo 1 00:04:53.340 07:20:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.340 07:20:02 -- scripts/common.sh@365 -- # decimal 2 00:04:53.341 07:20:02 -- scripts/common.sh@352 -- # local d=2 00:04:53.341 07:20:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.341 07:20:02 -- scripts/common.sh@354 -- # echo 2 00:04:53.341 07:20:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.341 07:20:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.341 07:20:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.341 07:20:02 -- scripts/common.sh@367 -- # return 0 00:04:53.341 07:20:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.341 --rc genhtml_branch_coverage=1 00:04:53.341 --rc genhtml_function_coverage=1 00:04:53.341 --rc genhtml_legend=1 00:04:53.341 --rc geninfo_all_blocks=1 00:04:53.341 --rc geninfo_unexecuted_blocks=1 00:04:53.341 00:04:53.341 ' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.341 --rc genhtml_branch_coverage=1 00:04:53.341 --rc genhtml_function_coverage=1 00:04:53.341 --rc genhtml_legend=1 00:04:53.341 --rc geninfo_all_blocks=1 00:04:53.341 --rc geninfo_unexecuted_blocks=1 00:04:53.341 00:04:53.341 ' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.341 --rc genhtml_branch_coverage=1 00:04:53.341 --rc genhtml_function_coverage=1 00:04:53.341 --rc genhtml_legend=1 00:04:53.341 --rc 
geninfo_all_blocks=1 00:04:53.341 --rc geninfo_unexecuted_blocks=1 00:04:53.341 00:04:53.341 ' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.341 --rc genhtml_branch_coverage=1 00:04:53.341 --rc genhtml_function_coverage=1 00:04:53.341 --rc genhtml_legend=1 00:04:53.341 --rc geninfo_all_blocks=1 00:04:53.341 --rc geninfo_unexecuted_blocks=1 00:04:53.341 00:04:53.341 ' 00:04:53.341 07:20:02 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.341 07:20:02 -- nvmf/common.sh@7 -- # uname -s 00:04:53.341 07:20:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.341 07:20:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.341 07:20:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.341 07:20:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.341 07:20:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.341 07:20:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.341 07:20:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.341 07:20:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.341 07:20:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.341 07:20:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.341 07:20:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:323a6621-08ea-4853-8a15-1f16326b6ad3 00:04:53.341 07:20:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=323a6621-08ea-4853-8a15-1f16326b6ad3 00:04:53.341 07:20:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.341 07:20:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.341 07:20:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.341 07:20:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.341 07:20:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.341 07:20:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.341 07:20:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.341 07:20:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.341 07:20:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.341 07:20:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.341 
07:20:02 -- paths/export.sh@5 -- # export PATH 00:04:53.341 07:20:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.341 07:20:02 -- nvmf/common.sh@46 -- # : 0 00:04:53.341 07:20:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:53.341 07:20:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:53.341 07:20:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:53.341 07:20:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.341 07:20:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.341 07:20:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:53.341 07:20:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:53.341 07:20:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:53.341 07:20:02 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:53.341 07:20:02 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:53.341 07:20:02 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:53.341 07:20:02 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.341 07:20:02 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:53.341 WARNING: No tests are enabled so not running JSON configuration tests 00:04:53.341 07:20:02 -- json_config/json_config.sh@27 -- # exit 0 00:04:53.341 00:04:53.341 real 0m0.137s 00:04:53.341 user 0m0.087s 00:04:53.341 sys 0m0.053s 00:04:53.341 07:20:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.341 ************************************ 00:04:53.341 END TEST json_config 00:04:53.341 ************************************ 00:04:53.341 07:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:53.341 07:20:02 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.341 07:20:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.341 07:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:53.341 ************************************ 00:04:53.341 START TEST json_config_extra_key 00:04:53.341 ************************************ 00:04:53.341 07:20:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:53.341 07:20:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.341 07:20:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.341 07:20:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.341 07:20:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.341 07:20:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.341 07:20:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.341 07:20:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.341 07:20:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.341 07:20:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.341 07:20:02 
-- scripts/common.sh@336 -- # read -ra ver2 00:04:53.341 07:20:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.341 07:20:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.341 07:20:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.341 07:20:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.341 07:20:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.341 07:20:02 -- scripts/common.sh@344 -- # : 1 00:04:53.341 07:20:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.341 07:20:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.341 07:20:02 -- scripts/common.sh@364 -- # decimal 1 00:04:53.341 07:20:02 -- scripts/common.sh@352 -- # local d=1 00:04:53.341 07:20:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.341 07:20:02 -- scripts/common.sh@354 -- # echo 1 00:04:53.341 07:20:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.341 07:20:02 -- scripts/common.sh@365 -- # decimal 2 00:04:53.341 07:20:02 -- scripts/common.sh@352 -- # local d=2 00:04:53.341 07:20:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.341 07:20:02 -- scripts/common.sh@354 -- # echo 2 00:04:53.341 07:20:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.341 07:20:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.341 07:20:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.341 07:20:02 -- scripts/common.sh@367 -- # return 0 00:04:53.341 07:20:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.341 --rc genhtml_branch_coverage=1 00:04:53.341 --rc genhtml_function_coverage=1 00:04:53.341 --rc genhtml_legend=1 00:04:53.341 --rc geninfo_all_blocks=1 00:04:53.341 --rc geninfo_unexecuted_blocks=1 00:04:53.341 00:04:53.341 ' 00:04:53.341 07:20:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.341 --rc genhtml_branch_coverage=1 00:04:53.341 --rc genhtml_function_coverage=1 00:04:53.341 --rc genhtml_legend=1 00:04:53.341 --rc geninfo_all_blocks=1 00:04:53.341 --rc geninfo_unexecuted_blocks=1 00:04:53.342 00:04:53.342 ' 00:04:53.342 07:20:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.342 --rc genhtml_branch_coverage=1 00:04:53.342 --rc genhtml_function_coverage=1 00:04:53.342 --rc genhtml_legend=1 00:04:53.342 --rc geninfo_all_blocks=1 00:04:53.342 --rc geninfo_unexecuted_blocks=1 00:04:53.342 00:04:53.342 ' 00:04:53.342 07:20:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.342 --rc genhtml_branch_coverage=1 00:04:53.342 --rc genhtml_function_coverage=1 00:04:53.342 --rc genhtml_legend=1 00:04:53.342 --rc geninfo_all_blocks=1 00:04:53.342 --rc geninfo_unexecuted_blocks=1 00:04:53.342 00:04:53.342 ' 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.342 07:20:02 -- nvmf/common.sh@7 -- # uname -s 00:04:53.342 07:20:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.342 07:20:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.342 07:20:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.342 07:20:02 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:04:53.342 07:20:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.342 07:20:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.342 07:20:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.342 07:20:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.342 07:20:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.342 07:20:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.342 07:20:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:323a6621-08ea-4853-8a15-1f16326b6ad3 00:04:53.342 07:20:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=323a6621-08ea-4853-8a15-1f16326b6ad3 00:04:53.342 07:20:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.342 07:20:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.342 07:20:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.342 07:20:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.342 07:20:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.342 07:20:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.342 07:20:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.342 07:20:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.342 07:20:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.342 07:20:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.342 07:20:02 -- paths/export.sh@5 -- # export PATH 00:04:53.342 07:20:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.342 07:20:02 -- nvmf/common.sh@46 -- # : 0 00:04:53.342 07:20:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:53.342 07:20:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:53.342 07:20:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:53.342 07:20:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.342 07:20:02 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.342 07:20:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:53.342 07:20:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:53.342 07:20:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:53.342 INFO: launching applications... 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56481 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:53.342 Waiting for target to run... 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56481 /var/tmp/spdk_tgt.sock 00:04:53.342 07:20:02 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.342 07:20:02 -- common/autotest_common.sh@829 -- # '[' -z 56481 ']' 00:04:53.342 07:20:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.342 07:20:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.342 07:20:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.342 07:20:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.342 07:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:53.342 [2024-11-19 07:20:02.583494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:53.342 [2024-11-19 07:20:02.583691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56481 ] 00:04:53.914 [2024-11-19 07:20:02.873092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.914 [2024-11-19 07:20:03.085524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.914 [2024-11-19 07:20:03.085948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.292 07:20:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.292 07:20:04 -- common/autotest_common.sh@862 -- # return 0 00:04:55.292 00:04:55.292 INFO: shutting down applications... 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56481 ]] 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56481 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56481 00:04:55.292 07:20:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:55.554 07:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:55.554 07:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:55.554 07:20:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56481 00:04:55.554 07:20:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:56.125 07:20:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:56.125 07:20:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:56.125 07:20:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56481 00:04:56.125 07:20:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:56.386 07:20:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:56.386 07:20:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:56.386 07:20:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56481 00:04:56.386 07:20:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:56.958 SPDK target shutdown done 00:04:56.958 07:20:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:56.958 07:20:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:56.958 07:20:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56481 00:04:56.958 07:20:06 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:56.959 07:20:06 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:56.959 07:20:06 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:56.959 07:20:06 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:56.959 Success 00:04:56.959 07:20:06 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:56.959 
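[annotation] The shutdown sequence above is a polled SIGINT: signal once, then probe with kill -0 every half second, up to 30 tries (about 15 s), before giving up. Condensed into a sketch of the same logic (shutdown_app here is a paraphrase of the traced json_config_test_shutdown_app, not its verbatim source):

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0   # process gone: shutdown done
            sleep 0.5
        done
        return 1                                     # still alive after ~15 s
    }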
************************************ 00:04:56.959 END TEST json_config_extra_key 00:04:56.959 ************************************ 00:04:56.959 00:04:56.959 real 0m3.741s 00:04:56.959 user 0m3.650s 00:04:56.959 sys 0m0.391s 00:04:56.959 07:20:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.959 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:04:56.959 07:20:06 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.959 07:20:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.959 07:20:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.959 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:04:56.959 ************************************ 00:04:56.959 START TEST alias_rpc 00:04:56.959 ************************************ 00:04:56.959 07:20:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.220 * Looking for test storage... 00:04:57.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:57.220 07:20:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.220 07:20:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.220 07:20:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.220 07:20:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.220 07:20:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.220 07:20:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.220 07:20:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.220 07:20:06 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.220 07:20:06 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.220 07:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.220 07:20:06 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.220 07:20:06 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.220 07:20:06 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.220 07:20:06 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.220 07:20:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.220 07:20:06 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.220 07:20:06 -- scripts/common.sh@344 -- # : 1 00:04:57.220 07:20:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.220 07:20:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.220 07:20:06 -- scripts/common.sh@364 -- # decimal 1 00:04:57.220 07:20:06 -- scripts/common.sh@352 -- # local d=1 00:04:57.220 07:20:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.220 07:20:06 -- scripts/common.sh@354 -- # echo 1 00:04:57.220 07:20:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.220 07:20:06 -- scripts/common.sh@365 -- # decimal 2 00:04:57.220 07:20:06 -- scripts/common.sh@352 -- # local d=2 00:04:57.220 07:20:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.220 07:20:06 -- scripts/common.sh@354 -- # echo 2 00:04:57.220 07:20:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.220 07:20:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.220 07:20:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.220 07:20:06 -- scripts/common.sh@367 -- # return 0 00:04:57.220 07:20:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.220 07:20:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.220 --rc genhtml_branch_coverage=1 00:04:57.220 --rc genhtml_function_coverage=1 00:04:57.220 --rc genhtml_legend=1 00:04:57.220 --rc geninfo_all_blocks=1 00:04:57.220 --rc geninfo_unexecuted_blocks=1 00:04:57.220 00:04:57.220 ' 00:04:57.220 07:20:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.220 --rc genhtml_branch_coverage=1 00:04:57.220 --rc genhtml_function_coverage=1 00:04:57.220 --rc genhtml_legend=1 00:04:57.220 --rc geninfo_all_blocks=1 00:04:57.220 --rc geninfo_unexecuted_blocks=1 00:04:57.220 00:04:57.220 ' 00:04:57.220 07:20:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.220 --rc genhtml_branch_coverage=1 00:04:57.220 --rc genhtml_function_coverage=1 00:04:57.220 --rc genhtml_legend=1 00:04:57.220 --rc geninfo_all_blocks=1 00:04:57.220 --rc geninfo_unexecuted_blocks=1 00:04:57.220 00:04:57.220 ' 00:04:57.220 07:20:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.220 --rc genhtml_branch_coverage=1 00:04:57.220 --rc genhtml_function_coverage=1 00:04:57.220 --rc genhtml_legend=1 00:04:57.220 --rc geninfo_all_blocks=1 00:04:57.220 --rc geninfo_unexecuted_blocks=1 00:04:57.220 00:04:57.220 ' 00:04:57.220 07:20:06 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.220 07:20:06 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56587 00:04:57.220 07:20:06 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56587 00:04:57.220 07:20:06 -- common/autotest_common.sh@829 -- # '[' -z 56587 ']' 00:04:57.220 07:20:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.220 07:20:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.220 07:20:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
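[annotation] The lcov probe traced before each test section reduces to a field-wise numeric comparison: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them component by component, treating missing fields as 0. A condensed sketch of that logic (the real scripts/common.sh also validates digits through its decimal helper, omitted here):

    lt() {
        local IFS='.-:' v
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1                                     # versions are equal
    }
    lt 1.15 2 && echo "lcov predates 2.x: pass the branch/function coverage flags"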
00:04:57.220 07:20:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.220 07:20:06 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.220 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:04:57.220 [2024-11-19 07:20:06.378085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.220 [2024-11-19 07:20:06.378210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56587 ] 00:04:57.481 [2024-11-19 07:20:06.524058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.481 [2024-11-19 07:20:06.695998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.481 [2024-11-19 07:20:06.696219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.866 07:20:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.866 07:20:07 -- common/autotest_common.sh@862 -- # return 0 00:04:58.866 07:20:07 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:58.866 07:20:08 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56587 00:04:58.866 07:20:08 -- common/autotest_common.sh@936 -- # '[' -z 56587 ']' 00:04:58.866 07:20:08 -- common/autotest_common.sh@940 -- # kill -0 56587 00:04:58.866 07:20:08 -- common/autotest_common.sh@941 -- # uname 00:04:58.866 07:20:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.866 07:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56587 00:04:59.127 07:20:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.127 07:20:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.127 killing process with pid 56587 00:04:59.127 07:20:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56587' 00:04:59.127 07:20:08 -- common/autotest_common.sh@955 -- # kill 56587 00:04:59.127 07:20:08 -- common/autotest_common.sh@960 -- # wait 56587 00:05:01.046 00:05:01.046 real 0m3.703s 00:05:01.046 user 0m3.923s 00:05:01.046 sys 0m0.420s 00:05:01.047 ************************************ 00:05:01.047 END TEST alias_rpc 00:05:01.047 ************************************ 00:05:01.047 07:20:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.047 07:20:09 -- common/autotest_common.sh@10 -- # set +x 00:05:01.047 07:20:09 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:01.047 07:20:09 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:01.047 07:20:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.047 07:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.047 07:20:09 -- common/autotest_common.sh@10 -- # set +x 00:05:01.047 ************************************ 00:05:01.047 START TEST spdkcli_tcp 00:05:01.047 ************************************ 00:05:01.047 07:20:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:01.047 * Looking for test storage... 
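[annotation] The killprocess teardown traced in the alias_rpc section above guards every signal: it confirms the pid is alive, resolves its command name with ps, and refuses to touch anything named sudo before killing and reaping. A condensed sketch with the same shape as the traced common/autotest_common.sh flow, not its verbatim source (note wait only reaps pids that are children of the calling shell):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1          # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1                 # never signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }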
00:05:01.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:01.047 07:20:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:01.047 07:20:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:01.047 07:20:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:01.047 07:20:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:01.047 07:20:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:01.047 07:20:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:01.047 07:20:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:01.047 07:20:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:01.047 07:20:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:01.047 07:20:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.047 07:20:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:01.047 07:20:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:01.047 07:20:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:01.047 07:20:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:01.047 07:20:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:01.047 07:20:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:01.047 07:20:10 -- scripts/common.sh@344 -- # : 1 00:05:01.047 07:20:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:01.047 07:20:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.047 07:20:10 -- scripts/common.sh@364 -- # decimal 1 00:05:01.047 07:20:10 -- scripts/common.sh@352 -- # local d=1 00:05:01.047 07:20:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.047 07:20:10 -- scripts/common.sh@354 -- # echo 1 00:05:01.047 07:20:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:01.047 07:20:10 -- scripts/common.sh@365 -- # decimal 2 00:05:01.047 07:20:10 -- scripts/common.sh@352 -- # local d=2 00:05:01.047 07:20:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.047 07:20:10 -- scripts/common.sh@354 -- # echo 2 00:05:01.047 07:20:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:01.047 07:20:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:01.047 07:20:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:01.047 07:20:10 -- scripts/common.sh@367 -- # return 0 00:05:01.047 07:20:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.047 07:20:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:01.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.047 --rc genhtml_branch_coverage=1 00:05:01.047 --rc genhtml_function_coverage=1 00:05:01.047 --rc genhtml_legend=1 00:05:01.047 --rc geninfo_all_blocks=1 00:05:01.047 --rc geninfo_unexecuted_blocks=1 00:05:01.047 00:05:01.047 ' 00:05:01.047 07:20:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:01.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.047 --rc genhtml_branch_coverage=1 00:05:01.047 --rc genhtml_function_coverage=1 00:05:01.047 --rc genhtml_legend=1 00:05:01.047 --rc geninfo_all_blocks=1 00:05:01.047 --rc geninfo_unexecuted_blocks=1 00:05:01.047 00:05:01.047 ' 00:05:01.047 07:20:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:01.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.047 --rc genhtml_branch_coverage=1 00:05:01.047 --rc genhtml_function_coverage=1 00:05:01.047 --rc genhtml_legend=1 00:05:01.047 --rc geninfo_all_blocks=1 00:05:01.047 --rc geninfo_unexecuted_blocks=1 00:05:01.047 00:05:01.047 ' 00:05:01.047 07:20:10 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:01.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.047 --rc genhtml_branch_coverage=1 00:05:01.047 --rc genhtml_function_coverage=1 00:05:01.047 --rc genhtml_legend=1 00:05:01.047 --rc geninfo_all_blocks=1 00:05:01.047 --rc geninfo_unexecuted_blocks=1 00:05:01.047 00:05:01.047 ' 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:01.047 07:20:10 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:01.047 07:20:10 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.047 07:20:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.047 07:20:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56695 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@27 -- # waitforlisten 56695 00:05:01.047 07:20:10 -- common/autotest_common.sh@829 -- # '[' -z 56695 ']' 00:05:01.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.047 07:20:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.047 07:20:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.047 07:20:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.047 07:20:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.047 07:20:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.047 07:20:10 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.047 [2024-11-19 07:20:10.175669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:01.047 [2024-11-19 07:20:10.175815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56695 ] 00:05:01.308 [2024-11-19 07:20:10.327544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.308 [2024-11-19 07:20:10.523590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.308 [2024-11-19 07:20:10.523955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.308 [2024-11-19 07:20:10.524018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.692 07:20:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.692 07:20:11 -- common/autotest_common.sh@862 -- # return 0 00:05:02.692 07:20:11 -- spdkcli/tcp.sh@31 -- # socat_pid=56714 00:05:02.692 07:20:11 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:02.692 07:20:11 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:02.692 [ 00:05:02.692 "bdev_malloc_delete", 00:05:02.692 "bdev_malloc_create", 00:05:02.692 "bdev_null_resize", 00:05:02.692 "bdev_null_delete", 00:05:02.692 "bdev_null_create", 00:05:02.692 "bdev_nvme_cuse_unregister", 00:05:02.692 "bdev_nvme_cuse_register", 00:05:02.692 "bdev_opal_new_user", 00:05:02.692 "bdev_opal_set_lock_state", 00:05:02.692 "bdev_opal_delete", 00:05:02.692 "bdev_opal_get_info", 00:05:02.692 "bdev_opal_create", 00:05:02.692 "bdev_nvme_opal_revert", 00:05:02.692 "bdev_nvme_opal_init", 00:05:02.692 "bdev_nvme_send_cmd", 00:05:02.692 "bdev_nvme_get_path_iostat", 00:05:02.692 "bdev_nvme_get_mdns_discovery_info", 00:05:02.692 "bdev_nvme_stop_mdns_discovery", 00:05:02.692 "bdev_nvme_start_mdns_discovery", 00:05:02.692 "bdev_nvme_set_multipath_policy", 00:05:02.692 "bdev_nvme_set_preferred_path", 00:05:02.692 "bdev_nvme_get_io_paths", 00:05:02.692 "bdev_nvme_remove_error_injection", 00:05:02.692 "bdev_nvme_add_error_injection", 00:05:02.692 "bdev_nvme_get_discovery_info", 00:05:02.692 "bdev_nvme_stop_discovery", 00:05:02.692 "bdev_nvme_start_discovery", 00:05:02.692 "bdev_nvme_get_controller_health_info", 00:05:02.692 "bdev_nvme_disable_controller", 00:05:02.692 "bdev_nvme_enable_controller", 00:05:02.692 "bdev_nvme_reset_controller", 00:05:02.692 "bdev_nvme_get_transport_statistics", 00:05:02.692 "bdev_nvme_apply_firmware", 00:05:02.692 "bdev_nvme_detach_controller", 00:05:02.692 "bdev_nvme_get_controllers", 00:05:02.692 "bdev_nvme_attach_controller", 00:05:02.692 "bdev_nvme_set_hotplug", 00:05:02.692 "bdev_nvme_set_options", 00:05:02.692 "bdev_passthru_delete", 00:05:02.692 "bdev_passthru_create", 00:05:02.692 "bdev_lvol_grow_lvstore", 00:05:02.692 "bdev_lvol_get_lvols", 00:05:02.692 "bdev_lvol_get_lvstores", 00:05:02.692 "bdev_lvol_delete", 00:05:02.692 "bdev_lvol_set_read_only", 00:05:02.692 "bdev_lvol_resize", 00:05:02.692 "bdev_lvol_decouple_parent", 00:05:02.692 "bdev_lvol_inflate", 00:05:02.692 "bdev_lvol_rename", 00:05:02.692 "bdev_lvol_clone_bdev", 00:05:02.692 "bdev_lvol_clone", 00:05:02.692 "bdev_lvol_snapshot", 00:05:02.692 "bdev_lvol_create", 00:05:02.692 "bdev_lvol_delete_lvstore", 00:05:02.692 "bdev_lvol_rename_lvstore", 00:05:02.692 "bdev_lvol_create_lvstore", 00:05:02.692 "bdev_raid_set_options", 00:05:02.692 "bdev_raid_remove_base_bdev", 00:05:02.692 "bdev_raid_add_base_bdev", 
00:05:02.692 "bdev_raid_delete", 00:05:02.692 "bdev_raid_create", 00:05:02.692 "bdev_raid_get_bdevs", 00:05:02.692 "bdev_error_inject_error", 00:05:02.692 "bdev_error_delete", 00:05:02.692 "bdev_error_create", 00:05:02.692 "bdev_split_delete", 00:05:02.692 "bdev_split_create", 00:05:02.692 "bdev_delay_delete", 00:05:02.692 "bdev_delay_create", 00:05:02.692 "bdev_delay_update_latency", 00:05:02.693 "bdev_zone_block_delete", 00:05:02.693 "bdev_zone_block_create", 00:05:02.693 "blobfs_create", 00:05:02.693 "blobfs_detect", 00:05:02.693 "blobfs_set_cache_size", 00:05:02.693 "bdev_xnvme_delete", 00:05:02.693 "bdev_xnvme_create", 00:05:02.693 "bdev_aio_delete", 00:05:02.693 "bdev_aio_rescan", 00:05:02.693 "bdev_aio_create", 00:05:02.693 "bdev_ftl_set_property", 00:05:02.693 "bdev_ftl_get_properties", 00:05:02.693 "bdev_ftl_get_stats", 00:05:02.693 "bdev_ftl_unmap", 00:05:02.693 "bdev_ftl_unload", 00:05:02.693 "bdev_ftl_delete", 00:05:02.693 "bdev_ftl_load", 00:05:02.693 "bdev_ftl_create", 00:05:02.693 "bdev_virtio_attach_controller", 00:05:02.693 "bdev_virtio_scsi_get_devices", 00:05:02.693 "bdev_virtio_detach_controller", 00:05:02.693 "bdev_virtio_blk_set_hotplug", 00:05:02.693 "bdev_iscsi_delete", 00:05:02.693 "bdev_iscsi_create", 00:05:02.693 "bdev_iscsi_set_options", 00:05:02.693 "accel_error_inject_error", 00:05:02.693 "ioat_scan_accel_module", 00:05:02.693 "dsa_scan_accel_module", 00:05:02.693 "iaa_scan_accel_module", 00:05:02.693 "iscsi_set_options", 00:05:02.693 "iscsi_get_auth_groups", 00:05:02.693 "iscsi_auth_group_remove_secret", 00:05:02.693 "iscsi_auth_group_add_secret", 00:05:02.693 "iscsi_delete_auth_group", 00:05:02.693 "iscsi_create_auth_group", 00:05:02.693 "iscsi_set_discovery_auth", 00:05:02.693 "iscsi_get_options", 00:05:02.693 "iscsi_target_node_request_logout", 00:05:02.693 "iscsi_target_node_set_redirect", 00:05:02.693 "iscsi_target_node_set_auth", 00:05:02.693 "iscsi_target_node_add_lun", 00:05:02.693 "iscsi_get_connections", 00:05:02.693 "iscsi_portal_group_set_auth", 00:05:02.693 "iscsi_start_portal_group", 00:05:02.693 "iscsi_delete_portal_group", 00:05:02.693 "iscsi_create_portal_group", 00:05:02.693 "iscsi_get_portal_groups", 00:05:02.693 "iscsi_delete_target_node", 00:05:02.693 "iscsi_target_node_remove_pg_ig_maps", 00:05:02.693 "iscsi_target_node_add_pg_ig_maps", 00:05:02.693 "iscsi_create_target_node", 00:05:02.693 "iscsi_get_target_nodes", 00:05:02.693 "iscsi_delete_initiator_group", 00:05:02.693 "iscsi_initiator_group_remove_initiators", 00:05:02.693 "iscsi_initiator_group_add_initiators", 00:05:02.693 "iscsi_create_initiator_group", 00:05:02.693 "iscsi_get_initiator_groups", 00:05:02.693 "nvmf_set_crdt", 00:05:02.693 "nvmf_set_config", 00:05:02.693 "nvmf_set_max_subsystems", 00:05:02.693 "nvmf_subsystem_get_listeners", 00:05:02.693 "nvmf_subsystem_get_qpairs", 00:05:02.693 "nvmf_subsystem_get_controllers", 00:05:02.693 "nvmf_get_stats", 00:05:02.693 "nvmf_get_transports", 00:05:02.693 "nvmf_create_transport", 00:05:02.693 "nvmf_get_targets", 00:05:02.693 "nvmf_delete_target", 00:05:02.693 "nvmf_create_target", 00:05:02.693 "nvmf_subsystem_allow_any_host", 00:05:02.693 "nvmf_subsystem_remove_host", 00:05:02.693 "nvmf_subsystem_add_host", 00:05:02.693 "nvmf_subsystem_remove_ns", 00:05:02.693 "nvmf_subsystem_add_ns", 00:05:02.693 "nvmf_subsystem_listener_set_ana_state", 00:05:02.693 "nvmf_discovery_get_referrals", 00:05:02.693 "nvmf_discovery_remove_referral", 00:05:02.693 "nvmf_discovery_add_referral", 00:05:02.693 "nvmf_subsystem_remove_listener", 00:05:02.693 
"nvmf_subsystem_add_listener", 00:05:02.693 "nvmf_delete_subsystem", 00:05:02.693 "nvmf_create_subsystem", 00:05:02.693 "nvmf_get_subsystems", 00:05:02.693 "env_dpdk_get_mem_stats", 00:05:02.693 "nbd_get_disks", 00:05:02.693 "nbd_stop_disk", 00:05:02.693 "nbd_start_disk", 00:05:02.693 "ublk_recover_disk", 00:05:02.693 "ublk_get_disks", 00:05:02.693 "ublk_stop_disk", 00:05:02.693 "ublk_start_disk", 00:05:02.693 "ublk_destroy_target", 00:05:02.693 "ublk_create_target", 00:05:02.693 "virtio_blk_create_transport", 00:05:02.693 "virtio_blk_get_transports", 00:05:02.693 "vhost_controller_set_coalescing", 00:05:02.693 "vhost_get_controllers", 00:05:02.693 "vhost_delete_controller", 00:05:02.693 "vhost_create_blk_controller", 00:05:02.693 "vhost_scsi_controller_remove_target", 00:05:02.693 "vhost_scsi_controller_add_target", 00:05:02.693 "vhost_start_scsi_controller", 00:05:02.693 "vhost_create_scsi_controller", 00:05:02.693 "thread_set_cpumask", 00:05:02.693 "framework_get_scheduler", 00:05:02.693 "framework_set_scheduler", 00:05:02.693 "framework_get_reactors", 00:05:02.693 "thread_get_io_channels", 00:05:02.693 "thread_get_pollers", 00:05:02.693 "thread_get_stats", 00:05:02.693 "framework_monitor_context_switch", 00:05:02.693 "spdk_kill_instance", 00:05:02.693 "log_enable_timestamps", 00:05:02.693 "log_get_flags", 00:05:02.693 "log_clear_flag", 00:05:02.693 "log_set_flag", 00:05:02.693 "log_get_level", 00:05:02.693 "log_set_level", 00:05:02.693 "log_get_print_level", 00:05:02.693 "log_set_print_level", 00:05:02.693 "framework_enable_cpumask_locks", 00:05:02.693 "framework_disable_cpumask_locks", 00:05:02.693 "framework_wait_init", 00:05:02.693 "framework_start_init", 00:05:02.693 "scsi_get_devices", 00:05:02.693 "bdev_get_histogram", 00:05:02.693 "bdev_enable_histogram", 00:05:02.693 "bdev_set_qos_limit", 00:05:02.693 "bdev_set_qd_sampling_period", 00:05:02.693 "bdev_get_bdevs", 00:05:02.693 "bdev_reset_iostat", 00:05:02.693 "bdev_get_iostat", 00:05:02.693 "bdev_examine", 00:05:02.693 "bdev_wait_for_examine", 00:05:02.693 "bdev_set_options", 00:05:02.693 "notify_get_notifications", 00:05:02.693 "notify_get_types", 00:05:02.693 "accel_get_stats", 00:05:02.693 "accel_set_options", 00:05:02.693 "accel_set_driver", 00:05:02.693 "accel_crypto_key_destroy", 00:05:02.693 "accel_crypto_keys_get", 00:05:02.693 "accel_crypto_key_create", 00:05:02.693 "accel_assign_opc", 00:05:02.693 "accel_get_module_info", 00:05:02.693 "accel_get_opc_assignments", 00:05:02.693 "vmd_rescan", 00:05:02.693 "vmd_remove_device", 00:05:02.693 "vmd_enable", 00:05:02.693 "sock_set_default_impl", 00:05:02.693 "sock_impl_set_options", 00:05:02.693 "sock_impl_get_options", 00:05:02.693 "iobuf_get_stats", 00:05:02.693 "iobuf_set_options", 00:05:02.693 "framework_get_pci_devices", 00:05:02.693 "framework_get_config", 00:05:02.693 "framework_get_subsystems", 00:05:02.693 "trace_get_info", 00:05:02.693 "trace_get_tpoint_group_mask", 00:05:02.693 "trace_disable_tpoint_group", 00:05:02.693 "trace_enable_tpoint_group", 00:05:02.693 "trace_clear_tpoint_mask", 00:05:02.693 "trace_set_tpoint_mask", 00:05:02.693 "spdk_get_version", 00:05:02.693 "rpc_get_methods" 00:05:02.693 ] 00:05:02.955 07:20:11 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:02.955 07:20:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.955 07:20:11 -- common/autotest_common.sh@10 -- # set +x 00:05:02.955 07:20:11 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:02.955 07:20:11 -- spdkcli/tcp.sh@38 -- # killprocess 56695 00:05:02.955 
07:20:11 -- common/autotest_common.sh@936 -- # '[' -z 56695 ']' 00:05:02.955 07:20:11 -- common/autotest_common.sh@940 -- # kill -0 56695 00:05:02.955 07:20:11 -- common/autotest_common.sh@941 -- # uname 00:05:02.955 07:20:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.955 07:20:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56695 00:05:02.955 07:20:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.955 07:20:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.955 killing process with pid 56695 00:05:02.955 07:20:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56695' 00:05:02.955 07:20:12 -- common/autotest_common.sh@955 -- # kill 56695 00:05:02.955 07:20:12 -- common/autotest_common.sh@960 -- # wait 56695 00:05:04.405 00:05:04.405 real 0m3.250s 00:05:04.405 user 0m5.903s 00:05:04.405 sys 0m0.517s 00:05:04.405 ************************************ 00:05:04.405 END TEST spdkcli_tcp 00:05:04.405 ************************************ 00:05:04.405 07:20:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.405 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:04.405 07:20:13 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.405 07:20:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.405 07:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.405 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:04.405 ************************************ 00:05:04.405 START TEST dpdk_mem_utility 00:05:04.405 ************************************ 00:05:04.405 07:20:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.405 * Looking for test storage... 00:05:04.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:04.405 07:20:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:04.405 07:20:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:04.405 07:20:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.405 07:20:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.405 07:20:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.405 07:20:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.405 07:20:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.405 07:20:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.405 07:20:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.405 07:20:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.405 07:20:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.405 07:20:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.405 07:20:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.405 07:20:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.405 07:20:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.405 07:20:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.405 07:20:13 -- scripts/common.sh@344 -- # : 1 00:05:04.405 07:20:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.405 07:20:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.405 07:20:13 -- scripts/common.sh@364 -- # decimal 1 00:05:04.405 07:20:13 -- scripts/common.sh@352 -- # local d=1 00:05:04.405 07:20:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.405 07:20:13 -- scripts/common.sh@354 -- # echo 1 00:05:04.405 07:20:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.405 07:20:13 -- scripts/common.sh@365 -- # decimal 2 00:05:04.406 07:20:13 -- scripts/common.sh@352 -- # local d=2 00:05:04.406 07:20:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.406 07:20:13 -- scripts/common.sh@354 -- # echo 2 00:05:04.406 07:20:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.406 07:20:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.406 07:20:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.406 07:20:13 -- scripts/common.sh@367 -- # return 0 00:05:04.406 07:20:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.406 07:20:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.406 --rc genhtml_branch_coverage=1 00:05:04.406 --rc genhtml_function_coverage=1 00:05:04.406 --rc genhtml_legend=1 00:05:04.406 --rc geninfo_all_blocks=1 00:05:04.406 --rc geninfo_unexecuted_blocks=1 00:05:04.406 00:05:04.406 ' 00:05:04.406 07:20:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.406 --rc genhtml_branch_coverage=1 00:05:04.406 --rc genhtml_function_coverage=1 00:05:04.406 --rc genhtml_legend=1 00:05:04.406 --rc geninfo_all_blocks=1 00:05:04.406 --rc geninfo_unexecuted_blocks=1 00:05:04.406 00:05:04.406 ' 00:05:04.406 07:20:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.406 --rc genhtml_branch_coverage=1 00:05:04.406 --rc genhtml_function_coverage=1 00:05:04.406 --rc genhtml_legend=1 00:05:04.406 --rc geninfo_all_blocks=1 00:05:04.406 --rc geninfo_unexecuted_blocks=1 00:05:04.406 00:05:04.406 ' 00:05:04.406 07:20:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.406 --rc genhtml_branch_coverage=1 00:05:04.406 --rc genhtml_function_coverage=1 00:05:04.406 --rc genhtml_legend=1 00:05:04.406 --rc geninfo_all_blocks=1 00:05:04.406 --rc geninfo_unexecuted_blocks=1 00:05:04.406 00:05:04.406 ' 00:05:04.406 07:20:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.406 07:20:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56807 00:05:04.406 07:20:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56807 00:05:04.406 07:20:13 -- common/autotest_common.sh@829 -- # '[' -z 56807 ']' 00:05:04.406 07:20:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.406 07:20:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.406 07:20:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:04.406 07:20:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.406 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:04.406 07:20:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.406 [2024-11-19 07:20:13.478023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:04.406 [2024-11-19 07:20:13.478209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56807 ] 00:05:04.406 [2024-11-19 07:20:13.628892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.666 [2024-11-19 07:20:13.854737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.666 [2024-11-19 07:20:13.854962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.054 07:20:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.054 07:20:15 -- common/autotest_common.sh@862 -- # return 0 00:05:06.054 07:20:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:06.054 07:20:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:06.054 07:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.054 07:20:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.054 { 00:05:06.054 "filename": "/tmp/spdk_mem_dump.txt" 00:05:06.054 } 00:05:06.054 07:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.054 07:20:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.054 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:06.054 1 heaps totaling size 820.000000 MiB 00:05:06.054 size: 820.000000 MiB heap id: 0 00:05:06.054 end heaps---------- 00:05:06.054 8 mempools totaling size 598.116089 MiB 00:05:06.054 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:06.054 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:06.054 size: 84.521057 MiB name: bdev_io_56807 00:05:06.054 size: 51.011292 MiB name: evtpool_56807 00:05:06.054 size: 50.003479 MiB name: msgpool_56807 00:05:06.054 size: 21.763794 MiB name: PDU_Pool 00:05:06.054 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:06.054 size: 0.026123 MiB name: Session_Pool 00:05:06.054 end mempools------- 00:05:06.054 6 memzones totaling size 4.142822 MiB 00:05:06.054 size: 1.000366 MiB name: RG_ring_0_56807 00:05:06.054 size: 1.000366 MiB name: RG_ring_1_56807 00:05:06.054 size: 1.000366 MiB name: RG_ring_4_56807 00:05:06.054 size: 1.000366 MiB name: RG_ring_5_56807 00:05:06.054 size: 0.125366 MiB name: RG_ring_2_56807 00:05:06.054 size: 0.015991 MiB name: RG_ring_3_56807 00:05:06.054 end memzones------- 00:05:06.054 07:20:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:06.054 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:05:06.054 list of free elements. 
size: 18.451294 MiB 00:05:06.054 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:06.054 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:06.054 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:06.054 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:06.054 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:06.054 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:06.054 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:06.054 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:06.054 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:06.054 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:06.054 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:06.054 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:06.054 element at address: 0x20001b000000 with size: 0.564880 MiB 00:05:06.054 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:06.054 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:06.054 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:06.054 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:06.054 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:06.054 list of standard malloc elements. size: 199.284302 MiB 00:05:06.054 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:06.054 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:06.054 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:06.054 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:06.054 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:06.054 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:06.054 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:06.054 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:06.054 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:06.054 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:06.054 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:06.054 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:06.054 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:06.054 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:06.055 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b092bc0 with size: 0.000244 MiB 
00:05:06.055 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:06.055 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:06.056 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:06.056 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:06.056 element at 
address: 0x20002846b580 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e680 
with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:06.056 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:06.056 list of memzone associated elements. 
size: 602.264404 MiB 00:05:06.056 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:06.056 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:06.056 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:06.056 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:06.056 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:06.056 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56807_0 00:05:06.056 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:06.056 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56807_0 00:05:06.056 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:06.056 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56807_0 00:05:06.056 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:06.056 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:06.056 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:06.056 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:06.056 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:06.056 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56807 00:05:06.056 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:06.056 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56807 00:05:06.056 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:06.056 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56807 00:05:06.056 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:06.056 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:06.056 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:06.056 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:06.056 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:06.056 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:06.056 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:06.056 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:06.056 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:06.056 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56807 00:05:06.056 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:06.056 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56807 00:05:06.056 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:06.056 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56807 00:05:06.056 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:06.056 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56807 00:05:06.056 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:06.056 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56807 00:05:06.056 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:06.056 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:06.057 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:06.057 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:06.057 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:06.057 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:06.057 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:06.057 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56807 00:05:06.057 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:06.057 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:06.057 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:06.057 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:06.057 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:06.057 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56807 00:05:06.057 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:06.057 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:06.057 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:06.057 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56807 00:05:06.057 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:06.057 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56807 00:05:06.057 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:06.057 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:06.057 07:20:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:06.057 07:20:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56807 00:05:06.057 07:20:15 -- common/autotest_common.sh@936 -- # '[' -z 56807 ']' 00:05:06.057 07:20:15 -- common/autotest_common.sh@940 -- # kill -0 56807 00:05:06.057 07:20:15 -- common/autotest_common.sh@941 -- # uname 00:05:06.057 07:20:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.057 07:20:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56807 00:05:06.057 07:20:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.057 07:20:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.057 killing process with pid 56807 00:05:06.057 07:20:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56807' 00:05:06.057 07:20:15 -- common/autotest_common.sh@955 -- # kill 56807 00:05:06.057 07:20:15 -- common/autotest_common.sh@960 -- # wait 56807 00:05:07.444 00:05:07.444 real 0m3.335s 00:05:07.444 user 0m3.410s 00:05:07.444 sys 0m0.527s 00:05:07.444 ************************************ 00:05:07.444 END TEST dpdk_mem_utility 00:05:07.444 ************************************ 00:05:07.444 07:20:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.444 07:20:16 -- common/autotest_common.sh@10 -- # set +x 00:05:07.444 07:20:16 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.444 07:20:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.444 07:20:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.444 07:20:16 -- common/autotest_common.sh@10 -- # set +x 00:05:07.444 ************************************ 00:05:07.444 START TEST event 00:05:07.444 ************************************ 00:05:07.444 07:20:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.444 * Looking for test storage... 
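The heap map above is the output of the two dpdk_mem_info.py invocations in the trace. The whole dpdk_mem_utility flow can be replayed by hand against a running target using only the commands the test itself issued (rpc_cmd in the trace is a thin wrapper over scripts/rpc.py; the default RPC socket /var/tmp/spdk.sock is assumed):

# 1. Start the target and wait for its RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
# 2. Dump DPDK memory stats; the RPC replies with the dump file it wrote.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# 3. Summarize heaps, mempools and memzones.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
# 4. Print the element-by-element map of heap id 0, as listed above.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0

The { "filename": "/tmp/spdk_mem_dump.txt" } reply earlier in the trace names the dump that dpdk_mem_info.py then reads; the test invokes the script with no -f argument, so that path appears to be its default input.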
00:05:07.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.706 07:20:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:07.706 07:20:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:07.706 07:20:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:07.706 07:20:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:07.706 07:20:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:07.706 07:20:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:07.706 07:20:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:07.706 07:20:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:07.706 07:20:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:07.706 07:20:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.706 07:20:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:07.706 07:20:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:07.706 07:20:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:07.706 07:20:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:07.706 07:20:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:07.706 07:20:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:07.706 07:20:16 -- scripts/common.sh@344 -- # : 1 00:05:07.706 07:20:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:07.706 07:20:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.706 07:20:16 -- scripts/common.sh@364 -- # decimal 1 00:05:07.706 07:20:16 -- scripts/common.sh@352 -- # local d=1 00:05:07.706 07:20:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.706 07:20:16 -- scripts/common.sh@354 -- # echo 1 00:05:07.706 07:20:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:07.706 07:20:16 -- scripts/common.sh@365 -- # decimal 2 00:05:07.706 07:20:16 -- scripts/common.sh@352 -- # local d=2 00:05:07.706 07:20:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.706 07:20:16 -- scripts/common.sh@354 -- # echo 2 00:05:07.706 07:20:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:07.706 07:20:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:07.706 07:20:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:07.706 07:20:16 -- scripts/common.sh@367 -- # return 0 00:05:07.706 07:20:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.706 07:20:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.706 --rc geninfo_all_blocks=1 00:05:07.706 --rc geninfo_unexecuted_blocks=1 00:05:07.706 00:05:07.706 ' 00:05:07.706 07:20:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.706 --rc geninfo_all_blocks=1 00:05:07.706 --rc geninfo_unexecuted_blocks=1 00:05:07.706 00:05:07.706 ' 00:05:07.706 07:20:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.706 --rc geninfo_all_blocks=1 00:05:07.706 --rc geninfo_unexecuted_blocks=1 00:05:07.706 00:05:07.706 ' 00:05:07.706 07:20:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.707 --rc geninfo_all_blocks=1 00:05:07.707 --rc geninfo_unexecuted_blocks=1 00:05:07.707 00:05:07.707 ' 00:05:07.707 07:20:16 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:07.707 07:20:16 -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.707 07:20:16 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.707 07:20:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:07.707 07:20:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.707 07:20:16 -- common/autotest_common.sh@10 -- # set +x 00:05:07.707 ************************************ 00:05:07.707 START TEST event_perf 00:05:07.707 ************************************ 00:05:07.707 07:20:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.707 Running I/O for 1 seconds...[2024-11-19 07:20:16.830983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:07.707 [2024-11-19 07:20:16.831116] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56905 ] 00:05:07.968 [2024-11-19 07:20:16.985116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.968 [2024-11-19 07:20:17.141365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.968 [2024-11-19 07:20:17.141683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.968 [2024-11-19 07:20:17.141875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.968 [2024-11-19 07:20:17.141920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.355 Running I/O for 1 seconds... 00:05:09.355 lcore 0: 159700 00:05:09.355 lcore 1: 159702 00:05:09.355 lcore 2: 159700 00:05:09.355 lcore 3: 159697 00:05:09.355 done. 00:05:09.355 ************************************ 00:05:09.355 END TEST event_perf 00:05:09.355 ************************************ 00:05:09.355 00:05:09.355 real 0m1.558s 00:05:09.355 user 0m4.337s 00:05:09.355 sys 0m0.102s 00:05:09.355 07:20:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.355 07:20:18 -- common/autotest_common.sh@10 -- # set +x 00:05:09.355 07:20:18 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.355 07:20:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:09.355 07:20:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.355 07:20:18 -- common/autotest_common.sh@10 -- # set +x 00:05:09.355 ************************************ 00:05:09.355 START TEST event_reactor 00:05:09.355 ************************************ 00:05:09.355 07:20:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.355 [2024-11-19 07:20:18.435266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
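event_perf above drove four reactors (coremask 0xF) for one second; the lcore lines are per-reactor event counts, roughly 160k each for this run. The binary takes the mask and duration directly, so the measurement can be repeated standalone; the values below are illustrative, not from the run:

# Hypothetical rerun: one reactor on core 0, five-second window.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x1 -t 5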
00:05:09.355 [2024-11-19 07:20:18.435373] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56950 ] 00:05:09.355 [2024-11-19 07:20:18.586243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.616 [2024-11-19 07:20:18.804974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.000 test_start 00:05:11.000 oneshot 00:05:11.000 tick 100 00:05:11.000 tick 100 00:05:11.000 tick 250 00:05:11.000 tick 100 00:05:11.000 tick 100 00:05:11.000 tick 250 00:05:11.000 tick 500 00:05:11.000 tick 100 00:05:11.000 tick 100 00:05:11.000 tick 100 00:05:11.000 tick 250 00:05:11.000 tick 100 00:05:11.000 tick 100 00:05:11.000 test_end 00:05:11.000 00:05:11.000 real 0m1.687s 00:05:11.000 user 0m1.486s 00:05:11.000 sys 0m0.087s 00:05:11.000 07:20:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.000 07:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:11.000 ************************************ 00:05:11.000 END TEST event_reactor 00:05:11.000 ************************************ 00:05:11.000 07:20:20 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.000 07:20:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:11.000 07:20:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.000 07:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:11.000 ************************************ 00:05:11.000 START TEST event_reactor_perf 00:05:11.000 ************************************ 00:05:11.000 07:20:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.000 [2024-11-19 07:20:20.177701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
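The START/END banners and the trailing real/user/sys totals framing event_perf and event_reactor above (and reactor_perf next) come from the run_test helper in autotest_common.sh. A reduced sketch of its observable behavior; the real helper also toggles xtrace and records timing bookkeeping:

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time {
        "$@"    # the wrapped test's own output appears here
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }           # bash's time keyword emits the real/user/sys lines last
}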
00:05:11.000 [2024-11-19 07:20:20.177807] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56992 ] 00:05:11.258 [2024-11-19 07:20:20.326761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.258 [2024-11-19 07:20:20.507622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.632 test_start 00:05:12.632 test_end 00:05:12.632 Performance: 314104 events per second 00:05:12.632 00:05:12.632 real 0m1.622s 00:05:12.632 user 0m1.432s 00:05:12.632 sys 0m0.081s 00:05:12.632 ************************************ 00:05:12.632 END TEST event_reactor_perf 00:05:12.632 ************************************ 00:05:12.632 07:20:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.632 07:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:12.632 07:20:21 -- event/event.sh@49 -- # uname -s 00:05:12.632 07:20:21 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:12.632 07:20:21 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.632 07:20:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.632 07:20:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.632 07:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:12.632 ************************************ 00:05:12.632 START TEST event_scheduler 00:05:12.632 ************************************ 00:05:12.632 07:20:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.890 * Looking for test storage... 00:05:12.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:12.890 07:20:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.890 07:20:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.890 07:20:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.890 07:20:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.890 07:20:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.890 07:20:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.890 07:20:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.890 07:20:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.890 07:20:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.890 07:20:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.890 07:20:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.890 07:20:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.890 07:20:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.890 07:20:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.890 07:20:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.890 07:20:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.890 07:20:21 -- scripts/common.sh@344 -- # : 1 00:05:12.890 07:20:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.890 07:20:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.890 07:20:21 -- scripts/common.sh@364 -- # decimal 1 00:05:12.890 07:20:21 -- scripts/common.sh@352 -- # local d=1 00:05:12.890 07:20:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.890 07:20:21 -- scripts/common.sh@354 -- # echo 1 00:05:12.890 07:20:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.890 07:20:21 -- scripts/common.sh@365 -- # decimal 2 00:05:12.890 07:20:21 -- scripts/common.sh@352 -- # local d=2 00:05:12.890 07:20:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.890 07:20:21 -- scripts/common.sh@354 -- # echo 2 00:05:12.890 07:20:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.890 07:20:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.890 07:20:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.890 07:20:21 -- scripts/common.sh@367 -- # return 0 00:05:12.890 07:20:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.890 07:20:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.890 --rc genhtml_branch_coverage=1 00:05:12.890 --rc genhtml_function_coverage=1 00:05:12.890 --rc genhtml_legend=1 00:05:12.890 --rc geninfo_all_blocks=1 00:05:12.890 --rc geninfo_unexecuted_blocks=1 00:05:12.890 00:05:12.890 ' 00:05:12.891 07:20:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.891 --rc genhtml_branch_coverage=1 00:05:12.891 --rc genhtml_function_coverage=1 00:05:12.891 --rc genhtml_legend=1 00:05:12.891 --rc geninfo_all_blocks=1 00:05:12.891 --rc geninfo_unexecuted_blocks=1 00:05:12.891 00:05:12.891 ' 00:05:12.891 07:20:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.891 --rc genhtml_branch_coverage=1 00:05:12.891 --rc genhtml_function_coverage=1 00:05:12.891 --rc genhtml_legend=1 00:05:12.891 --rc geninfo_all_blocks=1 00:05:12.891 --rc geninfo_unexecuted_blocks=1 00:05:12.891 00:05:12.891 ' 00:05:12.891 07:20:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.891 --rc genhtml_branch_coverage=1 00:05:12.891 --rc genhtml_function_coverage=1 00:05:12.891 --rc genhtml_legend=1 00:05:12.891 --rc geninfo_all_blocks=1 00:05:12.891 --rc geninfo_unexecuted_blocks=1 00:05:12.891 00:05:12.891 ' 00:05:12.891 07:20:21 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.891 07:20:21 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57056 00:05:12.891 07:20:21 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.891 07:20:21 -- scheduler/scheduler.sh@37 -- # waitforlisten 57056 00:05:12.891 07:20:21 -- common/autotest_common.sh@829 -- # '[' -z 57056 ']' 00:05:12.891 07:20:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.891 07:20:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.891 07:20:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
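scheduler.sh arms killprocess via the trap above, then blocks in waitforlisten until the app (launched with --wait-for-rpc just below) answers on /var/tmp/spdk.sock. A condensed sketch of the traced helper; rpc_get_methods is the standard liveness probe and the retry cap of 100 matches the trace, while the sleep interval is a guess:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1    # app died while starting
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5    # retry pacing (exact interval not shown in the trace)
    done
    return 1    # never started listening
}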
00:05:12.891 07:20:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.891 07:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:12.891 07:20:21 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.891 [2024-11-19 07:20:22.031909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:12.891 [2024-11-19 07:20:22.032023] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57056 ] 00:05:13.149 [2024-11-19 07:20:22.182329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.149 [2024-11-19 07:20:22.366898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.149 [2024-11-19 07:20:22.367248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.149 [2024-11-19 07:20:22.367376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.149 [2024-11-19 07:20:22.367478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.715 07:20:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.715 07:20:22 -- common/autotest_common.sh@862 -- # return 0 00:05:13.715 07:20:22 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:13.715 07:20:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.715 07:20:22 -- common/autotest_common.sh@10 -- # set +x 00:05:13.715 POWER: Env isn't set yet! 00:05:13.715 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:13.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.715 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.715 POWER: Attempting to initialise PSTAT power management... 00:05:13.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.715 POWER: Cannot set governor of lcore 0 to performance 00:05:13.715 POWER: Attempting to initialise AMD PSTATE power management... 00:05:13.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.715 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.715 POWER: Attempting to initialise CPPC power management... 00:05:13.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.715 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.715 POWER: Attempting to initialise VM power management... 
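The POWER lines above, and the GUEST_CHANNEL failure just below, are DPDK probing in order for ACPI cpufreq, PSTAT, AMD PSTATE, CPPC and finally VM power management; none is available inside this guest, so the dynamic scheduler falls back to running without a frequency governor. Each cpufreq probe roughly corresponds to the sysfs access it reports failing:

# What a successful ACPI cpufreq probe for lcore 0 would need on the host
# (absent in this VM, hence "Cannot set governor of lcore 0"; needs root):
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor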
00:05:13.715 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:13.715 POWER: Unable to set Power Management Environment for lcore 0 00:05:13.715 [2024-11-19 07:20:22.852495] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:13.715 [2024-11-19 07:20:22.852510] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:13.715 [2024-11-19 07:20:22.852519] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:13.715 [2024-11-19 07:20:22.852533] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:13.715 [2024-11-19 07:20:22.852543] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:13.715 [2024-11-19 07:20:22.852550] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:13.715 07:20:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.715 07:20:22 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:13.715 07:20:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.715 07:20:22 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 [2024-11-19 07:20:23.076958] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:13.974 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.974 07:20:23 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.974 07:20:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.974 07:20:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.974 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 ************************************ 00:05:13.974 START TEST scheduler_create_thread 00:05:13.974 ************************************ 00:05:13.974 07:20:23 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:13.974 07:20:23 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:13.974 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.974 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 2 00:05:13.974 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.974 07:20:23 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:13.974 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.974 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 3 00:05:13.974 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.974 07:20:23 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:13.974 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.974 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 4 00:05:13.974 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.974 07:20:23 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:13.974 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.974 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 5 00:05:13.974 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.974 07:20:23 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:13.974 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.974 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 6 00:05:13.975 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:13.975 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.975 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 7 00:05:13.975 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:13.975 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.975 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 8 00:05:13.975 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:13.975 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.975 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 9 00:05:13.975 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:13.975 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.975 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 10 00:05:13.975 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:13.975 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.975 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.975 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.975 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 07:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.975 07:20:23 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:13.975 07:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.975 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 07:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.875 07:20:24 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:15.875 07:20:24 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:15.875 07:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.875 07:20:24 -- common/autotest_common.sh@10 -- # set +x 00:05:16.812 ************************************ 00:05:16.812 END TEST scheduler_create_thread 00:05:16.812 ************************************ 00:05:16.812 07:20:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.812 00:05:16.812 real 0m2.615s 00:05:16.812 user 0m0.013s 00:05:16.812 sys 0m0.008s 00:05:16.812 07:20:25 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.812 07:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:16.812 07:20:25 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:16.812 07:20:25 -- scheduler/scheduler.sh@46 -- # killprocess 57056 00:05:16.812 07:20:25 -- common/autotest_common.sh@936 -- # '[' -z 57056 ']' 00:05:16.812 07:20:25 -- common/autotest_common.sh@940 -- # kill -0 57056 00:05:16.812 07:20:25 -- common/autotest_common.sh@941 -- # uname 00:05:16.812 07:20:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.812 07:20:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57056 00:05:16.812 killing process with pid 57056 00:05:16.812 07:20:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:16.812 07:20:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:16.812 07:20:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57056' 00:05:16.813 07:20:25 -- common/autotest_common.sh@955 -- # kill 57056 00:05:16.813 07:20:25 -- common/autotest_common.sh@960 -- # wait 57056 00:05:17.071 [2024-11-19 07:20:26.191094] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:17.635 00:05:17.635 real 0m5.015s 00:05:17.635 user 0m8.455s 00:05:17.635 sys 0m0.342s 00:05:17.635 07:20:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.635 ************************************ 00:05:17.635 END TEST event_scheduler 00:05:17.635 07:20:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.635 ************************************ 00:05:17.895 07:20:26 -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.895 07:20:26 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.896 07:20:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.896 07:20:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.896 07:20:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.896 ************************************ 00:05:17.896 START TEST app_repeat 00:05:17.896 ************************************ 00:05:17.896 07:20:26 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:17.896 07:20:26 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.896 07:20:26 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.896 07:20:26 -- event/event.sh@13 -- # local nbd_list 00:05:17.896 07:20:26 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.896 07:20:26 -- event/event.sh@14 -- # local bdev_list 00:05:17.896 07:20:26 -- event/event.sh@15 -- # local repeat_times=4 00:05:17.896 07:20:26 -- event/event.sh@17 -- # modprobe nbd 00:05:17.896 Process app_repeat pid: 57162 00:05:17.896 spdk_app_start Round 0 00:05:17.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
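The event_scheduler pass that just finished built its whole workload over RPC through the test's scheduler_plugin: a pinned busy and a pinned idle thread per core, a thread whose activity is raised mid-run, and one that is deleted. The calls, as issued in the trace (one create per cpumask 0x1, 0x2, 0x4, 0x8; thread ids 11 and 12 are the values the RPCs returned in this run):

rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0     # -> thread_id=11
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50               # raise to 50% active
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100       # -> thread_id=12
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12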
00:05:17.896 07:20:26 -- event/event.sh@19 -- # repeat_pid=57162 00:05:17.896 07:20:26 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.896 07:20:26 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57162' 00:05:17.896 07:20:26 -- event/event.sh@23 -- # for i in {0..2} 00:05:17.896 07:20:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:17.896 07:20:26 -- event/event.sh@25 -- # waitforlisten 57162 /var/tmp/spdk-nbd.sock 00:05:17.896 07:20:26 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:17.896 07:20:26 -- common/autotest_common.sh@829 -- # '[' -z 57162 ']' 00:05:17.896 07:20:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.896 07:20:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.896 07:20:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.896 07:20:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.896 07:20:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.896 [2024-11-19 07:20:26.962207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.896 [2024-11-19 07:20:26.962313] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57162 ] 00:05:17.896 [2024-11-19 07:20:27.111915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.154 [2024-11-19 07:20:27.292575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.154 [2024-11-19 07:20:27.292661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.718 07:20:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.718 07:20:27 -- common/autotest_common.sh@862 -- # return 0 00:05:18.718 07:20:27 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.976 Malloc0 00:05:18.976 07:20:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.235 Malloc1 00:05:19.235 07:20:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@12 -- # local i 00:05:19.235 07:20:28 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.235 /dev/nbd0 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.235 07:20:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.235 07:20:28 -- common/autotest_common.sh@867 -- # local i 00:05:19.235 07:20:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.235 07:20:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.235 07:20:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.235 07:20:28 -- common/autotest_common.sh@871 -- # break 00:05:19.235 07:20:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.235 07:20:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.235 07:20:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.235 1+0 records in 00:05:19.235 1+0 records out 00:05:19.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780043 s, 5.3 MB/s 00:05:19.235 07:20:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.235 07:20:28 -- common/autotest_common.sh@884 -- # size=4096 00:05:19.235 07:20:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.235 07:20:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.235 07:20:28 -- common/autotest_common.sh@887 -- # return 0 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.235 07:20:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.494 /dev/nbd1 00:05:19.494 07:20:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.494 07:20:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.494 07:20:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.494 07:20:28 -- common/autotest_common.sh@867 -- # local i 00:05:19.494 07:20:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.494 07:20:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.494 07:20:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.494 07:20:28 -- common/autotest_common.sh@871 -- # break 00:05:19.494 07:20:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.494 07:20:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.494 07:20:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.494 1+0 records in 00:05:19.494 1+0 records out 00:05:19.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258234 s, 15.9 MB/s 00:05:19.494 07:20:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.494 07:20:28 -- common/autotest_common.sh@884 -- # size=4096 00:05:19.494 07:20:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.494 07:20:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.494 07:20:28 -- common/autotest_common.sh@887 -- # return 0 00:05:19.494 
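The waitfornbd probe traced above (a grep of /proc/partitions followed by a one-block direct read) condenses to roughly the following sketch; the retry delay between poll attempts is not visible in the xtrace output and is an assumption, and $test_file stands in for the nbdtest scratch file used in this run:

    waitfornbd() {
        # poll /proc/partitions until the kernel has registered the device
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed delay; not shown in the trace
        done
        # prove the device services I/O: read one 4096-byte block back out
        dd if=/dev/$nbd_name of="$test_file" bs=4096 count=1 iflag=direct
        # a non-empty output file means the read went through
        [ "$(stat -c %s "$test_file")" != 0 ]
    }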
07:20:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.494 07:20:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.494 07:20:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.494 07:20:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.494 07:20:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.751 07:20:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.751 { 00:05:19.751 "nbd_device": "/dev/nbd0", 00:05:19.751 "bdev_name": "Malloc0" 00:05:19.751 }, 00:05:19.751 { 00:05:19.751 "nbd_device": "/dev/nbd1", 00:05:19.751 "bdev_name": "Malloc1" 00:05:19.751 } 00:05:19.751 ]' 00:05:19.751 07:20:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.751 { 00:05:19.751 "nbd_device": "/dev/nbd0", 00:05:19.751 "bdev_name": "Malloc0" 00:05:19.751 }, 00:05:19.751 { 00:05:19.751 "nbd_device": "/dev/nbd1", 00:05:19.751 "bdev_name": "Malloc1" 00:05:19.751 } 00:05:19.751 ]' 00:05:19.751 07:20:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.751 07:20:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.751 /dev/nbd1' 00:05:19.751 07:20:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.751 07:20:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.752 /dev/nbd1' 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.752 256+0 records in 00:05:19.752 256+0 records out 00:05:19.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00830255 s, 126 MB/s 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.752 256+0 records in 00:05:19.752 256+0 records out 00:05:19.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210898 s, 49.7 MB/s 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.752 256+0 records in 00:05:19.752 256+0 records out 00:05:19.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255251 s, 41.1 MB/s 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.752 07:20:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.752 07:20:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.752 07:20:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.752 07:20:29 -- bdev/nbd_common.sh@51 -- # local i 00:05:19.752 07:20:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.752 07:20:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@41 -- # break 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.010 07:20:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@41 -- # break 00:05:20.267 07:20:29 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.268 07:20:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.268 07:20:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.268 07:20:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@65 -- # true 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.548 
07:20:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.548 07:20:29 -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.548 07:20:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.841 07:20:29 -- event/event.sh@35 -- # sleep 3 00:05:21.412 [2024-11-19 07:20:30.616028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.671 [2024-11-19 07:20:30.746688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.671 [2024-11-19 07:20:30.746788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.671 [2024-11-19 07:20:30.850700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.671 [2024-11-19 07:20:30.850747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.205 spdk_app_start Round 1 00:05:24.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.205 07:20:32 -- event/event.sh@23 -- # for i in {0..2} 00:05:24.205 07:20:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:24.205 07:20:32 -- event/event.sh@25 -- # waitforlisten 57162 /var/tmp/spdk-nbd.sock 00:05:24.205 07:20:32 -- common/autotest_common.sh@829 -- # '[' -z 57162 ']' 00:05:24.205 07:20:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.205 07:20:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.205 07:20:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
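Each round repeats the export/write/verify/teardown cycle just completed for Round 0. Condensed from the trace (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and $rand stands for the nbdrandtest file), one round amounts to:

    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of="$rand" bs=4096 count=256            # 1 MiB of random data
    dd if="$rand" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it to each export
    dd if="$rand" of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$rand" /dev/nbd0                             # byte-for-byte verify
    cmp -b -n 1M "$rand" /dev/nbd1
    rm "$rand"
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM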
00:05:24.205 07:20:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.205 07:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:24.205 07:20:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.205 07:20:33 -- common/autotest_common.sh@862 -- # return 0 00:05:24.205 07:20:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.205 Malloc0 00:05:24.205 07:20:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.463 Malloc1 00:05:24.463 07:20:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@12 -- # local i 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.463 07:20:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.463 /dev/nbd0 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.722 07:20:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:24.722 07:20:33 -- common/autotest_common.sh@867 -- # local i 00:05:24.722 07:20:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:24.722 07:20:33 -- common/autotest_common.sh@871 -- # break 00:05:24.722 07:20:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.722 1+0 records in 00:05:24.722 1+0 records out 00:05:24.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188654 s, 21.7 MB/s 00:05:24.722 07:20:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.722 07:20:33 -- common/autotest_common.sh@884 -- # size=4096 00:05:24.722 07:20:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.722 07:20:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.722 07:20:33 -- common/autotest_common.sh@887 -- # return 0 00:05:24.722 07:20:33 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.722 /dev/nbd1 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.722 07:20:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.722 07:20:33 -- common/autotest_common.sh@867 -- # local i 00:05:24.722 07:20:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.722 07:20:33 -- common/autotest_common.sh@871 -- # break 00:05:24.722 07:20:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.722 07:20:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.722 1+0 records in 00:05:24.722 1+0 records out 00:05:24.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212227 s, 19.3 MB/s 00:05:24.722 07:20:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.722 07:20:33 -- common/autotest_common.sh@884 -- # size=4096 00:05:24.722 07:20:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.722 07:20:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.722 07:20:33 -- common/autotest_common.sh@887 -- # return 0 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.722 07:20:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.982 { 00:05:24.982 "nbd_device": "/dev/nbd0", 00:05:24.982 "bdev_name": "Malloc0" 00:05:24.982 }, 00:05:24.982 { 00:05:24.982 "nbd_device": "/dev/nbd1", 00:05:24.982 "bdev_name": "Malloc1" 00:05:24.982 } 00:05:24.982 ]' 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.982 { 00:05:24.982 "nbd_device": "/dev/nbd0", 00:05:24.982 "bdev_name": "Malloc0" 00:05:24.982 }, 00:05:24.982 { 00:05:24.982 "nbd_device": "/dev/nbd1", 00:05:24.982 "bdev_name": "Malloc1" 00:05:24.982 } 00:05:24.982 ]' 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.982 /dev/nbd1' 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.982 /dev/nbd1' 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.982 256+0 records in 00:05:24.982 256+0 records out 00:05:24.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805695 s, 130 MB/s 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.982 256+0 records in 00:05:24.982 256+0 records out 00:05:24.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147096 s, 71.3 MB/s 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.982 07:20:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.982 256+0 records in 00:05:24.982 256+0 records out 00:05:24.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205521 s, 51.0 MB/s 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@51 -- # local i 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@41 -- # break 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.241 07:20:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.500 07:20:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.500 07:20:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.500 07:20:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.500 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.501 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.501 07:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.501 07:20:34 -- bdev/nbd_common.sh@41 -- # break 00:05:25.501 07:20:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.501 07:20:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.501 07:20:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.501 07:20:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@65 -- # true 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.760 07:20:34 -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.760 07:20:34 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.019 07:20:35 -- event/event.sh@35 -- # sleep 3 00:05:26.586 [2024-11-19 07:20:35.764996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.845 [2024-11-19 07:20:35.892672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.845 [2024-11-19 07:20:35.892763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.845 [2024-11-19 07:20:35.996647] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.845 [2024-11-19 07:20:35.996709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.377 spdk_app_start Round 2 00:05:29.377 07:20:38 -- event/event.sh@23 -- # for i in {0..2} 00:05:29.377 07:20:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:29.377 07:20:38 -- event/event.sh@25 -- # waitforlisten 57162 /var/tmp/spdk-nbd.sock 00:05:29.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
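After stopping both exports, the trace confirms no nbd devices remain by listing disks over RPC and counting /dev/nbd entries; the pipeline, visible above, is:

    # expected to print 0 once teardown is complete
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd

Note that grep -c prints 0 but exits non-zero when nothing matches, which is why the trace shows a true entry immediately before count=0.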
00:05:29.377 07:20:38 -- common/autotest_common.sh@829 -- # '[' -z 57162 ']' 00:05:29.377 07:20:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.377 07:20:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.377 07:20:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.377 07:20:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.377 07:20:38 -- common/autotest_common.sh@10 -- # set +x 00:05:29.377 07:20:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.377 07:20:38 -- common/autotest_common.sh@862 -- # return 0 00:05:29.377 07:20:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.377 Malloc0 00:05:29.377 07:20:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.635 Malloc1 00:05:29.635 07:20:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.635 07:20:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.635 07:20:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.635 07:20:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.635 07:20:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.635 07:20:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@12 -- # local i 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.636 07:20:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.894 /dev/nbd0 00:05:29.894 07:20:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.894 07:20:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.894 07:20:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:29.894 07:20:38 -- common/autotest_common.sh@867 -- # local i 00:05:29.894 07:20:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.894 07:20:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.894 07:20:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:29.894 07:20:38 -- common/autotest_common.sh@871 -- # break 00:05:29.894 07:20:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.894 07:20:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.894 07:20:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.894 1+0 records in 00:05:29.894 1+0 records out 00:05:29.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458882 s, 8.9 MB/s 00:05:29.894 07:20:38 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.894 07:20:38 -- common/autotest_common.sh@884 -- # size=4096 00:05:29.894 07:20:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.894 07:20:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.894 07:20:38 -- common/autotest_common.sh@887 -- # return 0 00:05:29.894 07:20:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.894 07:20:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.894 07:20:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.894 /dev/nbd1 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.154 07:20:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.154 07:20:39 -- common/autotest_common.sh@867 -- # local i 00:05:30.154 07:20:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.154 07:20:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.154 07:20:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.154 07:20:39 -- common/autotest_common.sh@871 -- # break 00:05:30.154 07:20:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.154 07:20:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.154 07:20:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.154 1+0 records in 00:05:30.154 1+0 records out 00:05:30.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206874 s, 19.8 MB/s 00:05:30.154 07:20:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.154 07:20:39 -- common/autotest_common.sh@884 -- # size=4096 00:05:30.154 07:20:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.154 07:20:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.154 07:20:39 -- common/autotest_common.sh@887 -- # return 0 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.154 { 00:05:30.154 "nbd_device": "/dev/nbd0", 00:05:30.154 "bdev_name": "Malloc0" 00:05:30.154 }, 00:05:30.154 { 00:05:30.154 "nbd_device": "/dev/nbd1", 00:05:30.154 "bdev_name": "Malloc1" 00:05:30.154 } 00:05:30.154 ]' 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.154 { 00:05:30.154 "nbd_device": "/dev/nbd0", 00:05:30.154 "bdev_name": "Malloc0" 00:05:30.154 }, 00:05:30.154 { 00:05:30.154 "nbd_device": "/dev/nbd1", 00:05:30.154 "bdev_name": "Malloc1" 00:05:30.154 } 00:05:30.154 ]' 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.154 /dev/nbd1' 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.154 /dev/nbd1' 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.154 
07:20:39 -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.154 256+0 records in 00:05:30.154 256+0 records out 00:05:30.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00809556 s, 130 MB/s 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.154 07:20:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.413 256+0 records in 00:05:30.413 256+0 records out 00:05:30.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173294 s, 60.5 MB/s 00:05:30.413 07:20:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.413 07:20:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.413 256+0 records in 00:05:30.413 256+0 records out 00:05:30.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181173 s, 57.9 MB/s 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@51 -- # local i 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.414 
07:20:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@41 -- # break 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.414 07:20:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@41 -- # break 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.672 07:20:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@65 -- # true 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.930 07:20:40 -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.930 07:20:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.188 07:20:40 -- event/event.sh@35 -- # sleep 3 00:05:31.755 [2024-11-19 07:20:40.963984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.013 [2024-11-19 07:20:41.094201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.013 [2024-11-19 07:20:41.094232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.013 [2024-11-19 07:20:41.197761] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.013 [2024-11-19 07:20:41.197977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
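The killprocess helper that appears throughout this run (pids 57056 and 57162) reduces to roughly the sketch below; the sudo branch is reconstructed from the single comparison visible in the trace and is an assumption:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                   # is it still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        # the trace shows a sudo check before the plain kill; assumed shape:
        [ "$process_name" = sudo ] || kill "$pid"
        wait "$pid"                                      # reap it before returning
    }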
00:05:34.545 07:20:43 -- event/event.sh@38 -- # waitforlisten 57162 /var/tmp/spdk-nbd.sock 00:05:34.545 07:20:43 -- common/autotest_common.sh@829 -- # '[' -z 57162 ']' 00:05:34.545 07:20:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.545 07:20:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.545 07:20:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.545 07:20:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.545 07:20:43 -- common/autotest_common.sh@10 -- # set +x 00:05:34.545 07:20:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.545 07:20:43 -- common/autotest_common.sh@862 -- # return 0 00:05:34.545 07:20:43 -- event/event.sh@39 -- # killprocess 57162 00:05:34.545 07:20:43 -- common/autotest_common.sh@936 -- # '[' -z 57162 ']' 00:05:34.545 07:20:43 -- common/autotest_common.sh@940 -- # kill -0 57162 00:05:34.545 07:20:43 -- common/autotest_common.sh@941 -- # uname 00:05:34.545 07:20:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.545 07:20:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57162 00:05:34.545 killing process with pid 57162 00:05:34.545 07:20:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.545 07:20:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.545 07:20:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57162' 00:05:34.545 07:20:43 -- common/autotest_common.sh@955 -- # kill 57162 00:05:34.545 07:20:43 -- common/autotest_common.sh@960 -- # wait 57162 00:05:35.113 spdk_app_start is called in Round 0. 00:05:35.113 Shutdown signal received, stop current app iteration 00:05:35.113 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:35.113 spdk_app_start is called in Round 1. 00:05:35.113 Shutdown signal received, stop current app iteration 00:05:35.113 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:35.113 spdk_app_start is called in Round 2. 00:05:35.113 Shutdown signal received, stop current app iteration 00:05:35.113 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:35.113 spdk_app_start is called in Round 3. 
00:05:35.113 Shutdown signal received, stop current app iteration 00:05:35.113 07:20:44 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.113 07:20:44 -- event/event.sh@42 -- # return 0 00:05:35.113 00:05:35.113 real 0m17.233s 00:05:35.113 user 0m36.949s 00:05:35.113 sys 0m1.932s 00:05:35.113 07:20:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.113 07:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.113 ************************************ 00:05:35.113 END TEST app_repeat 00:05:35.113 ************************************ 00:05:35.113 07:20:44 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.113 07:20:44 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.113 07:20:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.113 07:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.113 07:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.113 ************************************ 00:05:35.113 START TEST cpu_locks 00:05:35.113 ************************************ 00:05:35.113 07:20:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.113 * Looking for test storage... 00:05:35.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.113 07:20:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.113 07:20:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.113 07:20:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.113 07:20:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.113 07:20:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.113 07:20:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.113 07:20:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.113 07:20:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.113 07:20:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.113 07:20:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.113 07:20:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.113 07:20:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.113 07:20:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.113 07:20:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.113 07:20:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.113 07:20:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.113 07:20:44 -- scripts/common.sh@344 -- # : 1 00:05:35.113 07:20:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.113 07:20:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.113 07:20:44 -- scripts/common.sh@364 -- # decimal 1 00:05:35.113 07:20:44 -- scripts/common.sh@352 -- # local d=1 00:05:35.113 07:20:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.113 07:20:44 -- scripts/common.sh@354 -- # echo 1 00:05:35.113 07:20:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.113 07:20:44 -- scripts/common.sh@365 -- # decimal 2 00:05:35.113 07:20:44 -- scripts/common.sh@352 -- # local d=2 00:05:35.113 07:20:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.113 07:20:44 -- scripts/common.sh@354 -- # echo 2 00:05:35.113 07:20:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.113 07:20:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.113 07:20:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.113 07:20:44 -- scripts/common.sh@367 -- # return 0 00:05:35.113 07:20:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.113 07:20:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.113 --rc genhtml_branch_coverage=1 00:05:35.113 --rc genhtml_function_coverage=1 00:05:35.113 --rc genhtml_legend=1 00:05:35.113 --rc geninfo_all_blocks=1 00:05:35.113 --rc geninfo_unexecuted_blocks=1 00:05:35.113 00:05:35.113 ' 00:05:35.113 07:20:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.113 --rc genhtml_branch_coverage=1 00:05:35.113 --rc genhtml_function_coverage=1 00:05:35.113 --rc genhtml_legend=1 00:05:35.113 --rc geninfo_all_blocks=1 00:05:35.113 --rc geninfo_unexecuted_blocks=1 00:05:35.113 00:05:35.113 ' 00:05:35.113 07:20:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.113 --rc genhtml_branch_coverage=1 00:05:35.113 --rc genhtml_function_coverage=1 00:05:35.113 --rc genhtml_legend=1 00:05:35.113 --rc geninfo_all_blocks=1 00:05:35.113 --rc geninfo_unexecuted_blocks=1 00:05:35.113 00:05:35.113 ' 00:05:35.113 07:20:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.113 --rc genhtml_branch_coverage=1 00:05:35.113 --rc genhtml_function_coverage=1 00:05:35.113 --rc genhtml_legend=1 00:05:35.113 --rc geninfo_all_blocks=1 00:05:35.113 --rc geninfo_unexecuted_blocks=1 00:05:35.113 00:05:35.113 ' 00:05:35.113 07:20:44 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.113 07:20:44 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.113 07:20:44 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.113 07:20:44 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.113 07:20:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.113 07:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.113 07:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.113 ************************************ 00:05:35.113 START TEST default_locks 00:05:35.113 ************************************ 00:05:35.113 07:20:44 -- common/autotest_common.sh@1114 -- # default_locks 00:05:35.113 07:20:44 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57586 00:05:35.113 07:20:44 -- event/cpu_locks.sh@47 -- # waitforlisten 57586 00:05:35.113 07:20:44 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:05:35.113 07:20:44 -- common/autotest_common.sh@829 -- # '[' -z 57586 ']' 00:05:35.113 07:20:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.113 07:20:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.113 07:20:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.113 07:20:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.113 07:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.385 [2024-11-19 07:20:44.406074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.385 [2024-11-19 07:20:44.406329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57586 ] 00:05:35.385 [2024-11-19 07:20:44.551814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.654 [2024-11-19 07:20:44.691500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.654 [2024-11-19 07:20:44.691779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.220 07:20:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.220 07:20:45 -- common/autotest_common.sh@862 -- # return 0 00:05:36.220 07:20:45 -- event/cpu_locks.sh@49 -- # locks_exist 57586 00:05:36.220 07:20:45 -- event/cpu_locks.sh@22 -- # lslocks -p 57586 00:05:36.220 07:20:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.220 07:20:45 -- event/cpu_locks.sh@50 -- # killprocess 57586 00:05:36.220 07:20:45 -- common/autotest_common.sh@936 -- # '[' -z 57586 ']' 00:05:36.220 07:20:45 -- common/autotest_common.sh@940 -- # kill -0 57586 00:05:36.220 07:20:45 -- common/autotest_common.sh@941 -- # uname 00:05:36.220 07:20:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.220 07:20:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57586 00:05:36.220 killing process with pid 57586 00:05:36.220 07:20:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.220 07:20:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.220 07:20:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57586' 00:05:36.220 07:20:45 -- common/autotest_common.sh@955 -- # kill 57586 00:05:36.220 07:20:45 -- common/autotest_common.sh@960 -- # wait 57586 00:05:37.595 07:20:46 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57586 00:05:37.595 07:20:46 -- common/autotest_common.sh@650 -- # local es=0 00:05:37.595 07:20:46 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57586 00:05:37.595 07:20:46 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:37.595 07:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.595 07:20:46 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:37.595 07:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.595 07:20:46 -- common/autotest_common.sh@653 -- # waitforlisten 57586 00:05:37.595 07:20:46 -- common/autotest_common.sh@829 -- # '[' -z 57586 ']' 00:05:37.595 07:20:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.595 07:20:46 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.595 ERROR: process (pid: 57586) is no longer running 00:05:37.595 07:20:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.595 07:20:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.595 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.595 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57586) - No such process 00:05:37.595 07:20:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.595 07:20:46 -- common/autotest_common.sh@862 -- # return 1 00:05:37.595 07:20:46 -- common/autotest_common.sh@653 -- # es=1 00:05:37.595 07:20:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.595 07:20:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.595 07:20:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.595 07:20:46 -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.595 07:20:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.595 07:20:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.595 07:20:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.595 ************************************ 00:05:37.595 END TEST default_locks 00:05:37.595 ************************************ 00:05:37.595 00:05:37.595 real 0m2.248s 00:05:37.595 user 0m2.248s 00:05:37.595 sys 0m0.403s 00:05:37.595 07:20:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.595 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.595 07:20:46 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.595 07:20:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.595 07:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.595 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.595 ************************************ 00:05:37.595 START TEST default_locks_via_rpc 00:05:37.595 ************************************ 00:05:37.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.595 07:20:46 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:37.595 07:20:46 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57645 00:05:37.595 07:20:46 -- event/cpu_locks.sh@63 -- # waitforlisten 57645 00:05:37.595 07:20:46 -- common/autotest_common.sh@829 -- # '[' -z 57645 ']' 00:05:37.595 07:20:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.595 07:20:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.595 07:20:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.595 07:20:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.595 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.595 07:20:46 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.595 [2024-11-19 07:20:46.690620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:37.595 [2024-11-19 07:20:46.691142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57645 ] 00:05:37.595 [2024-11-19 07:20:46.838468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.854 [2024-11-19 07:20:46.980409] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.854 [2024-11-19 07:20:46.980725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.421 07:20:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.421 07:20:47 -- common/autotest_common.sh@862 -- # return 0 00:05:38.421 07:20:47 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.421 07:20:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.421 07:20:47 -- common/autotest_common.sh@10 -- # set +x 00:05:38.421 07:20:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.421 07:20:47 -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.421 07:20:47 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.421 07:20:47 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.421 07:20:47 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.421 07:20:47 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.421 07:20:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.421 07:20:47 -- common/autotest_common.sh@10 -- # set +x 00:05:38.421 07:20:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.421 07:20:47 -- event/cpu_locks.sh@71 -- # locks_exist 57645 00:05:38.421 07:20:47 -- event/cpu_locks.sh@22 -- # lslocks -p 57645 00:05:38.421 07:20:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.680 07:20:47 -- event/cpu_locks.sh@73 -- # killprocess 57645 00:05:38.680 07:20:47 -- common/autotest_common.sh@936 -- # '[' -z 57645 ']' 00:05:38.680 07:20:47 -- common/autotest_common.sh@940 -- # kill -0 57645 00:05:38.680 07:20:47 -- common/autotest_common.sh@941 -- # uname 00:05:38.680 07:20:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.680 07:20:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57645 00:05:38.680 killing process with pid 57645 00:05:38.680 07:20:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.680 07:20:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.680 07:20:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57645' 00:05:38.680 07:20:47 -- common/autotest_common.sh@955 -- # kill 57645 00:05:38.680 07:20:47 -- common/autotest_common.sh@960 -- # wait 57645 00:05:40.054 00:05:40.054 real 0m2.299s 00:05:40.054 user 0m2.336s 00:05:40.054 sys 0m0.397s 00:05:40.054 07:20:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.054 07:20:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.054 ************************************ 00:05:40.054 END TEST default_locks_via_rpc 00:05:40.054 ************************************ 00:05:40.054 07:20:48 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.054 07:20:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.054 07:20:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.054 07:20:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.054 
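The default_locks_via_rpc pass that just completed reduces to a short sequence: start a lock-enforcing target, toggle the per-core lock over RPC, and confirm the flock is held again. A minimal sketch of that sequence, assuming a built SPDK tree and the stock scripts/rpc.py helper (the log itself only shows the suite's rpc_cmd wrapper):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # tree location taken from the paths in this log
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &               # claims an flock on /var/tmp/spdk_cpu_lock_000 for core 0
  pid=$!
  sleep 1                                               # crude stand-in for waitforlisten polling /var/tmp/spdk.sock
  "$SPDK_DIR/scripts/rpc.py" framework_disable_cpumask_locks   # release the per-core lock at runtime
  "$SPDK_DIR/scripts/rpc.py" framework_enable_cpumask_locks    # re-acquire it
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'core lock held'   # same check locks_exist runs
  kill "$pid"; wait "$pid" 2>/dev/null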
************************************ 00:05:40.054 START TEST non_locking_app_on_locked_coremask 00:05:40.054 ************************************ 00:05:40.054 07:20:48 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:40.054 07:20:48 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57697 00:05:40.054 07:20:48 -- event/cpu_locks.sh@81 -- # waitforlisten 57697 /var/tmp/spdk.sock 00:05:40.054 07:20:48 -- common/autotest_common.sh@829 -- # '[' -z 57697 ']' 00:05:40.054 07:20:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.054 07:20:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.054 07:20:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.054 07:20:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.054 07:20:48 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.054 07:20:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.054 [2024-11-19 07:20:49.031530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.054 [2024-11-19 07:20:49.031641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57697 ] 00:05:40.054 [2024-11-19 07:20:49.180068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.312 [2024-11-19 07:20:49.322554] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.312 [2024-11-19 07:20:49.322713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.878 07:20:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.878 07:20:49 -- common/autotest_common.sh@862 -- # return 0 00:05:40.878 07:20:49 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57713 00:05:40.878 07:20:49 -- event/cpu_locks.sh@85 -- # waitforlisten 57713 /var/tmp/spdk2.sock 00:05:40.878 07:20:49 -- common/autotest_common.sh@829 -- # '[' -z 57713 ']' 00:05:40.878 07:20:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.878 07:20:49 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.878 07:20:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.878 07:20:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.878 07:20:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.878 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:05:40.878 [2024-11-19 07:20:49.909770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.878 [2024-11-19 07:20:49.910051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57713 ] 00:05:40.878 [2024-11-19 07:20:50.054773] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
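The "CPU core locks deactivated" notice just above is what lets two targets share core 0: with locks enforced, a second reactor on the same mask would be refused. The pair of launches in this test reduces to the following, assuming spdk_tgt is on PATH:

  spdk_tgt -m 0x1 &                                                 # primary: enforces locks, serves /var/tmp/spdk.sock
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # secondary: same mask, opts out of lock enforcement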
00:05:40.878 [2024-11-19 07:20:50.054819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.139 [2024-11-19 07:20:50.351571] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.139 [2024-11-19 07:20:50.351728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.523 07:20:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.523 07:20:51 -- common/autotest_common.sh@862 -- # return 0 00:05:42.523 07:20:51 -- event/cpu_locks.sh@87 -- # locks_exist 57697 00:05:42.523 07:20:51 -- event/cpu_locks.sh@22 -- # lslocks -p 57697 00:05:42.523 07:20:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.523 07:20:51 -- event/cpu_locks.sh@89 -- # killprocess 57697 00:05:42.523 07:20:51 -- common/autotest_common.sh@936 -- # '[' -z 57697 ']' 00:05:42.523 07:20:51 -- common/autotest_common.sh@940 -- # kill -0 57697 00:05:42.523 07:20:51 -- common/autotest_common.sh@941 -- # uname 00:05:42.781 07:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.781 07:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57697 00:05:42.781 killing process with pid 57697 00:05:42.781 07:20:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.781 07:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.781 07:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57697' 00:05:42.781 07:20:51 -- common/autotest_common.sh@955 -- # kill 57697 00:05:42.781 07:20:51 -- common/autotest_common.sh@960 -- # wait 57697 00:05:45.314 07:20:54 -- event/cpu_locks.sh@90 -- # killprocess 57713 00:05:45.314 07:20:54 -- common/autotest_common.sh@936 -- # '[' -z 57713 ']' 00:05:45.314 07:20:54 -- common/autotest_common.sh@940 -- # kill -0 57713 00:05:45.314 07:20:54 -- common/autotest_common.sh@941 -- # uname 00:05:45.314 07:20:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.314 07:20:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57713 00:05:45.314 killing process with pid 57713 00:05:45.314 07:20:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.314 07:20:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.314 07:20:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57713' 00:05:45.314 07:20:54 -- common/autotest_common.sh@955 -- # kill 57713 00:05:45.314 07:20:54 -- common/autotest_common.sh@960 -- # wait 57713 00:05:46.250 ************************************ 00:05:46.250 END TEST non_locking_app_on_locked_coremask 00:05:46.250 ************************************ 00:05:46.250 00:05:46.250 real 0m6.372s 00:05:46.250 user 0m6.750s 00:05:46.250 sys 0m0.828s 00:05:46.250 07:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.250 07:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.250 07:20:55 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:46.250 07:20:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.250 07:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.250 07:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.250 ************************************ 00:05:46.250 START TEST locking_app_on_unlocked_coremask 00:05:46.250 ************************************ 00:05:46.250 07:20:55 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:46.250 07:20:55 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57811 00:05:46.250 07:20:55 -- event/cpu_locks.sh@99 -- # waitforlisten 57811 /var/tmp/spdk.sock 00:05:46.250 07:20:55 -- common/autotest_common.sh@829 -- # '[' -z 57811 ']' 00:05:46.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.250 07:20:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.250 07:20:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.250 07:20:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.250 07:20:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.250 07:20:55 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:46.250 07:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.250 [2024-11-19 07:20:55.441560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.250 [2024-11-19 07:20:55.441670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57811 ] 00:05:46.508 [2024-11-19 07:20:55.588263] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.508 [2024-11-19 07:20:55.588303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.508 [2024-11-19 07:20:55.733104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.508 [2024-11-19 07:20:55.733273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.081 07:20:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.081 07:20:56 -- common/autotest_common.sh@862 -- # return 0 00:05:47.081 07:20:56 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57822 00:05:47.081 07:20:56 -- event/cpu_locks.sh@103 -- # waitforlisten 57822 /var/tmp/spdk2.sock 00:05:47.081 07:20:56 -- common/autotest_common.sh@829 -- # '[' -z 57822 ']' 00:05:47.081 07:20:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.081 07:20:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.081 07:20:56 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.081 07:20:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.081 07:20:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.081 07:20:56 -- common/autotest_common.sh@10 -- # set +x 00:05:47.081 [2024-11-19 07:20:56.303716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:47.081 [2024-11-19 07:20:56.304049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57822 ] 00:05:47.342 [2024-11-19 07:20:56.453272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.601 [2024-11-19 07:20:56.741773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.601 [2024-11-19 07:20:56.741959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.979 07:20:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.979 07:20:57 -- common/autotest_common.sh@862 -- # return 0 00:05:48.979 07:20:57 -- event/cpu_locks.sh@105 -- # locks_exist 57822 00:05:48.979 07:20:57 -- event/cpu_locks.sh@22 -- # lslocks -p 57822 00:05:48.979 07:20:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.979 07:20:58 -- event/cpu_locks.sh@107 -- # killprocess 57811 00:05:48.979 07:20:58 -- common/autotest_common.sh@936 -- # '[' -z 57811 ']' 00:05:48.979 07:20:58 -- common/autotest_common.sh@940 -- # kill -0 57811 00:05:48.979 07:20:58 -- common/autotest_common.sh@941 -- # uname 00:05:48.979 07:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.979 07:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57811 00:05:48.979 killing process with pid 57811 00:05:48.979 07:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.979 07:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.979 07:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57811' 00:05:48.979 07:20:58 -- common/autotest_common.sh@955 -- # kill 57811 00:05:48.979 07:20:58 -- common/autotest_common.sh@960 -- # wait 57811 00:05:51.589 07:21:00 -- event/cpu_locks.sh@108 -- # killprocess 57822 00:05:51.590 07:21:00 -- common/autotest_common.sh@936 -- # '[' -z 57822 ']' 00:05:51.590 07:21:00 -- common/autotest_common.sh@940 -- # kill -0 57822 00:05:51.590 07:21:00 -- common/autotest_common.sh@941 -- # uname 00:05:51.590 07:21:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.590 07:21:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57822 00:05:51.590 killing process with pid 57822 00:05:51.590 07:21:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.590 07:21:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.590 07:21:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57822' 00:05:51.590 07:21:00 -- common/autotest_common.sh@955 -- # kill 57822 00:05:51.590 07:21:00 -- common/autotest_common.sh@960 -- # wait 57822 00:05:52.524 ************************************ 00:05:52.524 END TEST locking_app_on_unlocked_coremask 00:05:52.524 ************************************ 00:05:52.524 00:05:52.524 real 0m6.277s 00:05:52.524 user 0m6.607s 00:05:52.524 sys 0m0.843s 00:05:52.524 07:21:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.524 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.524 07:21:01 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.524 07:21:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.524 07:21:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.524 07:21:01 -- common/autotest_common.sh@10 -- # set +x 
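locking_app_on_unlocked_coremask, which finished above, inverts the roles: the primary starts with --disable-cpumask-locks, so the plain-locking secondary (pid 57822 in this run) is the one holding the core-0 lock file. The locks_exist check behind that verification is essentially:

  locks_exist() {
      local pid=$1
      # a pid "has the lock" if lslocks shows it holding an spdk_cpu_lock_* file
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 57822 && echo 'secondary owns the core lock'   # pid taken from this run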
00:05:52.524 ************************************ 00:05:52.524 START TEST locking_app_on_locked_coremask 00:05:52.524 ************************************ 00:05:52.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.524 07:21:01 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:52.524 07:21:01 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57920 00:05:52.524 07:21:01 -- event/cpu_locks.sh@116 -- # waitforlisten 57920 /var/tmp/spdk.sock 00:05:52.524 07:21:01 -- common/autotest_common.sh@829 -- # '[' -z 57920 ']' 00:05:52.524 07:21:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.524 07:21:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.524 07:21:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.524 07:21:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.524 07:21:01 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.524 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.524 [2024-11-19 07:21:01.757848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.524 [2024-11-19 07:21:01.757963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57920 ] 00:05:52.784 [2024-11-19 07:21:01.905951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.045 [2024-11-19 07:21:02.082600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.045 [2024-11-19 07:21:02.082826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.428 07:21:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.428 07:21:03 -- common/autotest_common.sh@862 -- # return 0 00:05:54.428 07:21:03 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57944 00:05:54.428 07:21:03 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57944 /var/tmp/spdk2.sock 00:05:54.428 07:21:03 -- common/autotest_common.sh@650 -- # local es=0 00:05:54.428 07:21:03 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57944 /var/tmp/spdk2.sock 00:05:54.428 07:21:03 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.428 07:21:03 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.428 07:21:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.428 07:21:03 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.428 07:21:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.428 07:21:03 -- common/autotest_common.sh@653 -- # waitforlisten 57944 /var/tmp/spdk2.sock 00:05:54.428 07:21:03 -- common/autotest_common.sh@829 -- # '[' -z 57944 ']' 00:05:54.428 07:21:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.428 07:21:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.428 07:21:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:54.428 07:21:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.428 07:21:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.428 [2024-11-19 07:21:03.321521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.428 [2024-11-19 07:21:03.321641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57944 ] 00:05:54.428 [2024-11-19 07:21:03.476375] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57920 has claimed it. 00:05:54.428 [2024-11-19 07:21:03.476428] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.704 ERROR: process (pid: 57944) is no longer running 00:05:54.704 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57944) - No such process 00:05:54.704 07:21:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.704 07:21:03 -- common/autotest_common.sh@862 -- # return 1 00:05:54.704 07:21:03 -- common/autotest_common.sh@653 -- # es=1 00:05:54.704 07:21:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.704 07:21:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.704 07:21:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.704 07:21:03 -- event/cpu_locks.sh@122 -- # locks_exist 57920 00:05:54.704 07:21:03 -- event/cpu_locks.sh@22 -- # lslocks -p 57920 00:05:54.704 07:21:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.965 07:21:04 -- event/cpu_locks.sh@124 -- # killprocess 57920 00:05:54.965 07:21:04 -- common/autotest_common.sh@936 -- # '[' -z 57920 ']' 00:05:54.965 07:21:04 -- common/autotest_common.sh@940 -- # kill -0 57920 00:05:54.965 07:21:04 -- common/autotest_common.sh@941 -- # uname 00:05:54.965 07:21:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.965 07:21:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57920 00:05:54.965 killing process with pid 57920 00:05:54.965 07:21:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.965 07:21:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.965 07:21:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57920' 00:05:54.965 07:21:04 -- common/autotest_common.sh@955 -- # kill 57920 00:05:54.965 07:21:04 -- common/autotest_common.sh@960 -- # wait 57920 00:05:56.351 ************************************ 00:05:56.351 END TEST locking_app_on_locked_coremask 00:05:56.351 ************************************ 00:05:56.351 00:05:56.351 real 0m3.580s 00:05:56.351 user 0m3.943s 00:05:56.351 sys 0m0.512s 00:05:56.351 07:21:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.351 07:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.351 07:21:05 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.351 07:21:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.351 07:21:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.351 07:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.351 ************************************ 00:05:56.351 START TEST locking_overlapped_coremask 00:05:56.351 ************************************ 00:05:56.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
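The claim failure logged just before this banner (app.c:665 claim_cpu_cores, then app.c:791 aborting startup) is the enforcement mechanism itself. locking_overlapped_coremask now generalizes it from identical masks to merely intersecting ones: the primary takes -m 0x7 (cores 0 to 2), and a second instance on -m 0x1c (cores 2 to 4) must be refused because both masks cover core 2. The overlap is plain bitwise arithmetic:

  # mask intersection mirroring what claim_cpu_cores enforces:
  # 0x07 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4)
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is contested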
00:05:56.351 07:21:05 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:56.351 07:21:05 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57997 00:05:56.351 07:21:05 -- event/cpu_locks.sh@133 -- # waitforlisten 57997 /var/tmp/spdk.sock 00:05:56.351 07:21:05 -- common/autotest_common.sh@829 -- # '[' -z 57997 ']' 00:05:56.351 07:21:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.351 07:21:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.351 07:21:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.351 07:21:05 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.351 07:21:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.351 07:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.351 [2024-11-19 07:21:05.375565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.351 [2024-11-19 07:21:05.375652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57997 ] 00:05:56.351 [2024-11-19 07:21:05.517287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.611 [2024-11-19 07:21:05.665619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.612 [2024-11-19 07:21:05.665992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.612 [2024-11-19 07:21:05.666259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.612 [2024-11-19 07:21:05.666276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.182 07:21:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.182 07:21:06 -- common/autotest_common.sh@862 -- # return 0 00:05:57.182 07:21:06 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58015 00:05:57.182 07:21:06 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.182 07:21:06 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58015 /var/tmp/spdk2.sock 00:05:57.182 07:21:06 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.182 07:21:06 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58015 /var/tmp/spdk2.sock 00:05:57.182 07:21:06 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:57.182 07:21:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.182 07:21:06 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:57.182 07:21:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.182 07:21:06 -- common/autotest_common.sh@653 -- # waitforlisten 58015 /var/tmp/spdk2.sock 00:05:57.182 07:21:06 -- common/autotest_common.sh@829 -- # '[' -z 58015 ']' 00:05:57.182 07:21:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.182 07:21:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.182 07:21:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:57.182 07:21:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.182 07:21:06 -- common/autotest_common.sh@10 -- # set +x 00:05:57.182 [2024-11-19 07:21:06.255732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.182 [2024-11-19 07:21:06.256014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58015 ] 00:05:57.182 [2024-11-19 07:21:06.411841] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57997 has claimed it. 00:05:57.182 [2024-11-19 07:21:06.411902] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.752 ERROR: process (pid: 58015) is no longer running 00:05:57.752 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58015) - No such process 00:05:57.752 07:21:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.752 07:21:06 -- common/autotest_common.sh@862 -- # return 1 00:05:57.752 07:21:06 -- common/autotest_common.sh@653 -- # es=1 00:05:57.752 07:21:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.752 07:21:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.752 07:21:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.752 07:21:06 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.752 07:21:06 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.752 07:21:06 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.752 07:21:06 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.752 07:21:06 -- event/cpu_locks.sh@141 -- # killprocess 57997 00:05:57.752 07:21:06 -- common/autotest_common.sh@936 -- # '[' -z 57997 ']' 00:05:57.752 07:21:06 -- common/autotest_common.sh@940 -- # kill -0 57997 00:05:57.752 07:21:06 -- common/autotest_common.sh@941 -- # uname 00:05:57.752 07:21:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.752 07:21:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57997 00:05:57.752 07:21:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.752 07:21:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.752 killing process with pid 57997 00:05:57.752 07:21:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57997' 00:05:57.752 07:21:06 -- common/autotest_common.sh@955 -- # kill 57997 00:05:57.752 07:21:06 -- common/autotest_common.sh@960 -- # wait 57997 00:05:59.127 ************************************ 00:05:59.127 END TEST locking_overlapped_coremask 00:05:59.127 ************************************ 00:05:59.127 00:05:59.127 real 0m2.832s 00:05:59.127 user 0m7.441s 00:05:59.127 sys 0m0.386s 00:05:59.127 07:21:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.127 07:21:08 -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 07:21:08 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.127 07:21:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.127 07:21:08 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.127 07:21:08 -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 ************************************ 00:05:59.127 START TEST locking_overlapped_coremask_via_rpc 00:05:59.127 ************************************ 00:05:59.127 07:21:08 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:59.127 07:21:08 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58068 00:05:59.127 07:21:08 -- event/cpu_locks.sh@149 -- # waitforlisten 58068 /var/tmp/spdk.sock 00:05:59.127 07:21:08 -- common/autotest_common.sh@829 -- # '[' -z 58068 ']' 00:05:59.127 07:21:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.127 07:21:08 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.127 07:21:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.127 07:21:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.127 07:21:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.127 07:21:08 -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 [2024-11-19 07:21:08.247700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.127 [2024-11-19 07:21:08.247787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58068 ] 00:05:59.385 [2024-11-19 07:21:08.389212] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:59.386 [2024-11-19 07:21:08.389411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.386 [2024-11-19 07:21:08.556251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.386 [2024-11-19 07:21:08.556551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.386 [2024-11-19 07:21:08.556872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.386 [2024-11-19 07:21:08.556893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.952 07:21:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.952 07:21:09 -- common/autotest_common.sh@862 -- # return 0 00:05:59.952 07:21:09 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58086 00:05:59.952 07:21:09 -- event/cpu_locks.sh@153 -- # waitforlisten 58086 /var/tmp/spdk2.sock 00:05:59.952 07:21:09 -- common/autotest_common.sh@829 -- # '[' -z 58086 ']' 00:05:59.952 07:21:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.952 07:21:09 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.952 07:21:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.952 07:21:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:59.952 07:21:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.952 07:21:09 -- common/autotest_common.sh@10 -- # set +x 00:05:59.952 [2024-11-19 07:21:09.127666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.952 [2024-11-19 07:21:09.127984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58086 ] 00:06:00.208 [2024-11-19 07:21:09.281125] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.208 [2024-11-19 07:21:09.281169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.464 [2024-11-19 07:21:09.640626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.465 [2024-11-19 07:21:09.641065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.465 [2024-11-19 07:21:09.644271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.465 [2024-11-19 07:21:09.644297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:02.368 07:21:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.368 07:21:11 -- common/autotest_common.sh@862 -- # return 0 00:06:02.368 07:21:11 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.368 07:21:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.368 07:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.368 07:21:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.368 07:21:11 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.368 07:21:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:02.368 07:21:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.368 07:21:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:02.368 07:21:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.368 07:21:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:02.368 07:21:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.368 07:21:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.368 07:21:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.368 07:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.368 [2024-11-19 07:21:11.408312] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58068 has claimed it. 00:06:02.368 request: 00:06:02.368 { 00:06:02.368 "method": "framework_enable_cpumask_locks", 00:06:02.368 "req_id": 1 00:06:02.368 } 00:06:02.368 Got JSON-RPC error response 00:06:02.368 response: 00:06:02.368 { 00:06:02.368 "code": -32603, 00:06:02.368 "message": "Failed to claim CPU core: 2" 00:06:02.368 } 00:06:02.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
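The -32603 response above is the runtime flavor of the same conflict: both targets here booted with --disable-cpumask-locks, the first (mask 0x7) then claimed its cores via a successful framework_enable_cpumask_locks call, so the second target (mask 0x1c) is refused on the contested core 2 when it attempts the same call. Reproducing the failing call with the rpc.py helper (helper path is an assumption; the log shows only the rpc_cmd wrapper):

  # ask the second target, on its own socket, to start enforcing core locks;
  # core 2 is already flocked by the 0x7 target, so the call must fail
  if ! "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo 'expected JSON-RPC error -32603: Failed to claim CPU core: 2'
  fi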
00:06:02.368 07:21:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.368 07:21:11 -- common/autotest_common.sh@653 -- # es=1 00:06:02.368 07:21:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.368 07:21:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.368 07:21:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.368 07:21:11 -- event/cpu_locks.sh@158 -- # waitforlisten 58068 /var/tmp/spdk.sock 00:06:02.368 07:21:11 -- common/autotest_common.sh@829 -- # '[' -z 58068 ']' 00:06:02.368 07:21:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.368 07:21:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.368 07:21:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.368 07:21:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.368 07:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.368 07:21:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.368 07:21:11 -- common/autotest_common.sh@862 -- # return 0 00:06:02.368 07:21:11 -- event/cpu_locks.sh@159 -- # waitforlisten 58086 /var/tmp/spdk2.sock 00:06:02.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.368 07:21:11 -- common/autotest_common.sh@829 -- # '[' -z 58086 ']' 00:06:02.368 07:21:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.368 07:21:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.368 07:21:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.368 07:21:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.368 07:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.626 07:21:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.626 07:21:11 -- common/autotest_common.sh@862 -- # return 0 00:06:02.626 07:21:11 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.626 07:21:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.626 07:21:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.626 07:21:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.626 00:06:02.626 real 0m3.631s 00:06:02.626 user 0m1.399s 00:06:02.626 sys 0m0.158s 00:06:02.626 07:21:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.626 07:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.626 ************************************ 00:06:02.626 END TEST locking_overlapped_coremask_via_rpc 00:06:02.626 ************************************ 00:06:02.626 07:21:11 -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.626 07:21:11 -- event/cpu_locks.sh@15 -- # [[ -z 58068 ]] 00:06:02.626 07:21:11 -- event/cpu_locks.sh@15 -- # killprocess 58068 00:06:02.626 07:21:11 -- common/autotest_common.sh@936 -- # '[' -z 58068 ']' 00:06:02.626 07:21:11 -- common/autotest_common.sh@940 -- # kill -0 58068 00:06:02.626 07:21:11 -- common/autotest_common.sh@941 -- # uname 00:06:02.626 07:21:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.626 07:21:11 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 58068 00:06:02.626 killing process with pid 58068 00:06:02.626 07:21:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.626 07:21:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.626 07:21:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58068' 00:06:02.626 07:21:11 -- common/autotest_common.sh@955 -- # kill 58068 00:06:02.626 07:21:11 -- common/autotest_common.sh@960 -- # wait 58068 00:06:04.007 07:21:13 -- event/cpu_locks.sh@16 -- # [[ -z 58086 ]] 00:06:04.007 07:21:13 -- event/cpu_locks.sh@16 -- # killprocess 58086 00:06:04.007 07:21:13 -- common/autotest_common.sh@936 -- # '[' -z 58086 ']' 00:06:04.007 07:21:13 -- common/autotest_common.sh@940 -- # kill -0 58086 00:06:04.007 07:21:13 -- common/autotest_common.sh@941 -- # uname 00:06:04.007 07:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.007 07:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58086 00:06:04.007 killing process with pid 58086 00:06:04.007 07:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:04.007 07:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:04.007 07:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58086' 00:06:04.007 07:21:13 -- common/autotest_common.sh@955 -- # kill 58086 00:06:04.007 07:21:13 -- common/autotest_common.sh@960 -- # wait 58086 00:06:05.393 07:21:14 -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.393 07:21:14 -- event/cpu_locks.sh@1 -- # cleanup 00:06:05.393 07:21:14 -- event/cpu_locks.sh@15 -- # [[ -z 58068 ]] 00:06:05.393 07:21:14 -- event/cpu_locks.sh@15 -- # killprocess 58068 00:06:05.393 07:21:14 -- common/autotest_common.sh@936 -- # '[' -z 58068 ']' 00:06:05.393 07:21:14 -- common/autotest_common.sh@940 -- # kill -0 58068 00:06:05.393 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58068) - No such process 00:06:05.393 Process with pid 58068 is not found 00:06:05.393 Process with pid 58086 is not found 00:06:05.393 07:21:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58068 is not found' 00:06:05.393 07:21:14 -- event/cpu_locks.sh@16 -- # [[ -z 58086 ]] 00:06:05.393 07:21:14 -- event/cpu_locks.sh@16 -- # killprocess 58086 00:06:05.393 07:21:14 -- common/autotest_common.sh@936 -- # '[' -z 58086 ']' 00:06:05.393 07:21:14 -- common/autotest_common.sh@940 -- # kill -0 58086 00:06:05.393 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58086) - No such process 00:06:05.393 07:21:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58086 is not found' 00:06:05.393 07:21:14 -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.393 ************************************ 00:06:05.393 END TEST cpu_locks 00:06:05.393 ************************************ 00:06:05.393 00:06:05.393 real 0m30.173s 00:06:05.393 user 0m54.314s 00:06:05.393 sys 0m4.333s 00:06:05.393 07:21:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.393 07:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:05.393 ************************************ 00:06:05.393 END TEST event 00:06:05.393 ************************************ 00:06:05.393 00:06:05.393 real 0m57.784s 00:06:05.393 user 1m47.133s 00:06:05.393 sys 0m7.121s 00:06:05.393 07:21:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.393 07:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:05.393 07:21:14 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.393 07:21:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.393 07:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.393 07:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:05.393 ************************************ 00:06:05.393 START TEST thread 00:06:05.393 ************************************ 00:06:05.393 07:21:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.393 * Looking for test storage... 00:06:05.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:05.394 07:21:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:05.394 07:21:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:05.394 07:21:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:05.394 07:21:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:05.394 07:21:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:05.394 07:21:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:05.394 07:21:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:05.394 07:21:14 -- scripts/common.sh@335 -- # IFS=.-: 00:06:05.394 07:21:14 -- scripts/common.sh@335 -- # read -ra ver1 00:06:05.394 07:21:14 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.394 07:21:14 -- scripts/common.sh@336 -- # read -ra ver2 00:06:05.394 07:21:14 -- scripts/common.sh@337 -- # local 'op=<' 00:06:05.394 07:21:14 -- scripts/common.sh@339 -- # ver1_l=2 00:06:05.394 07:21:14 -- scripts/common.sh@340 -- # ver2_l=1 00:06:05.394 07:21:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:05.394 07:21:14 -- scripts/common.sh@343 -- # case "$op" in 00:06:05.394 07:21:14 -- scripts/common.sh@344 -- # : 1 00:06:05.394 07:21:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:05.394 07:21:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.394 07:21:14 -- scripts/common.sh@364 -- # decimal 1 00:06:05.394 07:21:14 -- scripts/common.sh@352 -- # local d=1 00:06:05.394 07:21:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.394 07:21:14 -- scripts/common.sh@354 -- # echo 1 00:06:05.394 07:21:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:05.394 07:21:14 -- scripts/common.sh@365 -- # decimal 2 00:06:05.394 07:21:14 -- scripts/common.sh@352 -- # local d=2 00:06:05.394 07:21:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.394 07:21:14 -- scripts/common.sh@354 -- # echo 2 00:06:05.394 07:21:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:05.394 07:21:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:05.394 07:21:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:05.394 07:21:14 -- scripts/common.sh@367 -- # return 0 00:06:05.394 07:21:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.394 07:21:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:05.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.394 --rc genhtml_branch_coverage=1 00:06:05.394 --rc genhtml_function_coverage=1 00:06:05.394 --rc genhtml_legend=1 00:06:05.394 --rc geninfo_all_blocks=1 00:06:05.394 --rc geninfo_unexecuted_blocks=1 00:06:05.394 00:06:05.394 ' 00:06:05.394 07:21:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:05.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.394 --rc genhtml_branch_coverage=1 00:06:05.394 --rc genhtml_function_coverage=1 00:06:05.394 --rc genhtml_legend=1 00:06:05.394 --rc geninfo_all_blocks=1 00:06:05.394 --rc geninfo_unexecuted_blocks=1 00:06:05.394 00:06:05.394 ' 00:06:05.394 07:21:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:05.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.394 --rc genhtml_branch_coverage=1 00:06:05.394 --rc genhtml_function_coverage=1 00:06:05.394 --rc genhtml_legend=1 00:06:05.394 --rc geninfo_all_blocks=1 00:06:05.394 --rc geninfo_unexecuted_blocks=1 00:06:05.394 00:06:05.394 ' 00:06:05.394 07:21:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:05.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.394 --rc genhtml_branch_coverage=1 00:06:05.394 --rc genhtml_function_coverage=1 00:06:05.394 --rc genhtml_legend=1 00:06:05.394 --rc geninfo_all_blocks=1 00:06:05.394 --rc geninfo_unexecuted_blocks=1 00:06:05.394 00:06:05.394 ' 00:06:05.394 07:21:14 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.394 07:21:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:05.394 07:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.394 07:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:05.394 ************************************ 00:06:05.394 START TEST thread_poller_perf 00:06:05.394 ************************************ 00:06:05.394 07:21:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.653 [2024-11-19 07:21:14.659289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:05.653 [2024-11-19 07:21:14.659400] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58249 ] 00:06:05.653 [2024-11-19 07:21:14.806304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.912 [2024-11-19 07:21:14.959447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.912 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:07.287 [2024-11-19T07:21:16.537Z] ====================================== 00:06:07.287 [2024-11-19T07:21:16.537Z] busy:2611886252 (cyc) 00:06:07.287 [2024-11-19T07:21:16.537Z] total_run_count: 368000 00:06:07.287 [2024-11-19T07:21:16.537Z] tsc_hz: 2600000000 (cyc) 00:06:07.287 [2024-11-19T07:21:16.537Z] ====================================== 00:06:07.287 [2024-11-19T07:21:16.537Z] poller_cost: 7097 (cyc), 2729 (nsec) 00:06:07.287 00:06:07.287 real 0m1.553s 00:06:07.287 user 0m1.368s 00:06:07.287 sys 0m0.076s 00:06:07.287 07:21:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.287 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:07.287 ************************************ 00:06:07.287 END TEST thread_poller_perf 00:06:07.287 ************************************ 00:06:07.287 07:21:16 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.287 07:21:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:07.287 07:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.287 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:07.287 ************************************ 00:06:07.287 START TEST thread_poller_perf 00:06:07.287 ************************************ 00:06:07.287 07:21:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.287 [2024-11-19 07:21:16.246871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.288 [2024-11-19 07:21:16.247262] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58291 ] 00:06:07.288 [2024-11-19 07:21:16.400718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.546 [2024-11-19 07:21:16.553813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.546 Running 1000 pollers for 1 seconds with 0 microseconds period. 
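Each summary block reduces to one division: poller_cost is busy cycles over total_run_count, converted to wall time through tsc_hz. For the 1 microsecond period run above, 2611886252 / 368000 is roughly 7097 cycles, and 7097 cycles / 2.6 GHz is roughly 2729 ns per poller call; the 0 microsecond run reported next is far cheaper per call because untimed pollers run on every reactor iteration. The same arithmetic, checkable from the shell:

  awk 'BEGIN { busy = 2611886252; runs = 368000; hz = 2600000000   # figures from the run above
               cyc = busy / runs
               printf "poller_cost: %d cyc, %d nsec\n", cyc, cyc * 1e9 / hz }'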
00:06:08.920 [2024-11-19T07:21:18.170Z] ====================================== 00:06:08.920 [2024-11-19T07:21:18.170Z] busy:2603437250 (cyc) 00:06:08.920 [2024-11-19T07:21:18.170Z] total_run_count: 5186000 00:06:08.920 [2024-11-19T07:21:18.170Z] tsc_hz: 2600000000 (cyc) 00:06:08.920 [2024-11-19T07:21:18.170Z] ====================================== 00:06:08.920 [2024-11-19T07:21:18.170Z] poller_cost: 502 (cyc), 193 (nsec) 00:06:08.920 00:06:08.920 real 0m1.541s 00:06:08.920 user 0m1.353s 00:06:08.920 sys 0m0.079s 00:06:08.920 ************************************ 00:06:08.920 END TEST thread_poller_perf 00:06:08.920 ************************************ 00:06:08.920 07:21:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.920 07:21:17 -- common/autotest_common.sh@10 -- # set +x 00:06:08.920 07:21:17 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.920 ************************************ 00:06:08.920 END TEST thread 00:06:08.920 ************************************ 00:06:08.920 00:06:08.920 real 0m3.339s 00:06:08.920 user 0m2.833s 00:06:08.920 sys 0m0.268s 00:06:08.920 07:21:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.920 07:21:17 -- common/autotest_common.sh@10 -- # set +x 00:06:08.920 07:21:17 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:08.920 07:21:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.920 07:21:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.920 07:21:17 -- common/autotest_common.sh@10 -- # set +x 00:06:08.920 ************************************ 00:06:08.920 START TEST accel 00:06:08.920 ************************************ 00:06:08.920 07:21:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:08.920 * Looking for test storage... 00:06:08.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:08.920 07:21:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.920 07:21:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.920 07:21:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.920 07:21:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.920 07:21:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.920 07:21:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.920 07:21:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.920 07:21:18 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.920 07:21:18 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.920 07:21:18 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.920 07:21:18 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.920 07:21:18 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.920 07:21:18 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.920 07:21:18 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.920 07:21:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.920 07:21:18 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.920 07:21:18 -- scripts/common.sh@344 -- # : 1 00:06:08.920 07:21:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.920 07:21:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.920 07:21:18 -- scripts/common.sh@364 -- # decimal 1 00:06:08.920 07:21:18 -- scripts/common.sh@352 -- # local d=1 00:06:08.920 07:21:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.920 07:21:18 -- scripts/common.sh@354 -- # echo 1 00:06:08.920 07:21:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.920 07:21:18 -- scripts/common.sh@365 -- # decimal 2 00:06:08.920 07:21:18 -- scripts/common.sh@352 -- # local d=2 00:06:08.920 07:21:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.920 07:21:18 -- scripts/common.sh@354 -- # echo 2 00:06:08.920 07:21:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.920 07:21:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.920 07:21:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.920 07:21:18 -- scripts/common.sh@367 -- # return 0 00:06:08.921 07:21:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.921 07:21:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.921 --rc genhtml_branch_coverage=1 00:06:08.921 --rc genhtml_function_coverage=1 00:06:08.921 --rc genhtml_legend=1 00:06:08.921 --rc geninfo_all_blocks=1 00:06:08.921 --rc geninfo_unexecuted_blocks=1 00:06:08.921 00:06:08.921 ' 00:06:08.921 07:21:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.921 --rc genhtml_branch_coverage=1 00:06:08.921 --rc genhtml_function_coverage=1 00:06:08.921 --rc genhtml_legend=1 00:06:08.921 --rc geninfo_all_blocks=1 00:06:08.921 --rc geninfo_unexecuted_blocks=1 00:06:08.921 00:06:08.921 ' 00:06:08.921 07:21:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.921 --rc genhtml_branch_coverage=1 00:06:08.921 --rc genhtml_function_coverage=1 00:06:08.921 --rc genhtml_legend=1 00:06:08.921 --rc geninfo_all_blocks=1 00:06:08.921 --rc geninfo_unexecuted_blocks=1 00:06:08.921 00:06:08.921 ' 00:06:08.921 07:21:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.921 --rc genhtml_branch_coverage=1 00:06:08.921 --rc genhtml_function_coverage=1 00:06:08.921 --rc genhtml_legend=1 00:06:08.921 --rc geninfo_all_blocks=1 00:06:08.921 --rc geninfo_unexecuted_blocks=1 00:06:08.921 00:06:08.921 ' 00:06:08.921 07:21:18 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:08.921 07:21:18 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:08.921 07:21:18 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.921 07:21:18 -- accel/accel.sh@59 -- # spdk_tgt_pid=58374 00:06:08.921 07:21:18 -- accel/accel.sh@60 -- # waitforlisten 58374 00:06:08.921 07:21:18 -- common/autotest_common.sh@829 -- # '[' -z 58374 ']' 00:06:08.921 07:21:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.921 07:21:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.921 07:21:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:08.921 07:21:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.921 07:21:18 -- common/autotest_common.sh@10 -- # set +x 00:06:08.921 07:21:18 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:08.921 07:21:18 -- accel/accel.sh@58 -- # build_accel_config 00:06:08.921 07:21:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.921 07:21:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.921 07:21:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.921 07:21:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.921 07:21:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.921 07:21:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.921 07:21:18 -- accel/accel.sh@42 -- # jq -r . 00:06:08.921 [2024-11-19 07:21:18.108225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.921 [2024-11-19 07:21:18.108368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58374 ] 00:06:09.179 [2024-11-19 07:21:18.272367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.437 [2024-11-19 07:21:18.449953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.437 [2024-11-19 07:21:18.450156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.380 07:21:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.380 07:21:19 -- common/autotest_common.sh@862 -- # return 0 00:06:10.380 07:21:19 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:10.380 07:21:19 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:10.380 07:21:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.380 07:21:19 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:10.380 07:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.380 07:21:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.380 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.380 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.380 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.380 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.380 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.380 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.380 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.380 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.380 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.380 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.380 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.380 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.380 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.380 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.381 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.381 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.381 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.381 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.381 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:10.381 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.381 07:21:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # IFS== 00:06:10.381 07:21:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:10.381 07:21:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:10.381 07:21:19 -- accel/accel.sh@67 -- # killprocess 58374 00:06:10.381 07:21:19 -- common/autotest_common.sh@936 -- # '[' -z 58374 ']' 00:06:10.381 07:21:19 -- common/autotest_common.sh@940 -- # kill -0 58374 00:06:10.381 07:21:19 -- common/autotest_common.sh@941 -- # uname 00:06:10.381 07:21:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.381 07:21:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58374 00:06:10.642 killing process with pid 58374 00:06:10.642 07:21:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.642 07:21:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.642 07:21:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58374' 00:06:10.642 07:21:19 -- common/autotest_common.sh@955 -- # kill 58374 00:06:10.642 07:21:19 -- common/autotest_common.sh@960 -- # wait 58374 00:06:12.553 07:21:21 -- accel/accel.sh@68 -- # trap - ERR 00:06:12.553 07:21:21 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:12.553 07:21:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:12.553 07:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.553 07:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.553 07:21:21 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:12.553 07:21:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:12.553 07:21:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.553 07:21:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.553 07:21:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.553 07:21:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.553 07:21:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.553 07:21:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.553 07:21:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.553 07:21:21 -- accel/accel.sh@42 -- # jq -r . 
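Note: before spdk_tgt (pid 58374) was shut down above, get_expected_opcs snapshotted the opcode-to-module table over RPC; with no hardware accel modules configured, every opcode came back as software. A paraphrase of that loop, with the rpc.py invocation assumed (the trace only shows $rpc_py):

    exp_opcs=($(scripts/rpc.py accel_get_opc_assignments |
        jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')) # yields e.g. "copy=software"
    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do
        IFS='=' read -r opc module <<< "$opc_opt" # split "copy=software" on '='
        expected_opcs[$opc]=$module               # all "software" in this run
    done

Each test below then checks that the module actually used (the accel_module=software assignments in the val traces) matches this expectation.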
00:06:12.554 07:21:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.554 07:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.554 07:21:21 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:12.554 07:21:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.554 07:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.554 07:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.554 ************************************ 00:06:12.554 START TEST accel_missing_filename 00:06:12.554 ************************************ 00:06:12.554 07:21:21 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:12.554 07:21:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:12.554 07:21:21 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:12.554 07:21:21 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:12.554 07:21:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.554 07:21:21 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:12.554 07:21:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.554 07:21:21 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:12.554 07:21:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:12.554 07:21:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.554 07:21:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.554 07:21:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.554 07:21:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.554 07:21:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.554 07:21:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.554 07:21:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.554 07:21:21 -- accel/accel.sh@42 -- # jq -r . 00:06:12.554 [2024-11-19 07:21:21.480889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.554 [2024-11-19 07:21:21.481008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58451 ] 00:06:12.554 [2024-11-19 07:21:21.630250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.811 [2024-11-19 07:21:21.810306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.811 [2024-11-19 07:21:21.951430] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.069 [2024-11-19 07:21:22.282237] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:13.327 A filename is required. 
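Note: that failure is exactly what the NOT wrapper asserts: a compress workload with no input file aborts before the app starts. Reproduced standalone with the binary path this run uses (the -c /dev/fd/62 config plumbing is omitted here for brevity):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # => 'A filename is required.' plus a non-zero exit, which NOT inverts into a pass
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib # -l supplies the uncompressed input file

The accel_compress_verify test that follows exercises exactly that -l form and expects a different rejection: compress does not support the -y verify option.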
00:06:13.327 07:21:22 -- common/autotest_common.sh@653 -- # es=234 00:06:13.327 07:21:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.327 07:21:22 -- common/autotest_common.sh@662 -- # es=106 00:06:13.327 07:21:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:13.327 07:21:22 -- common/autotest_common.sh@670 -- # es=1 00:06:13.327 07:21:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.327 00:06:13.327 real 0m1.108s 00:06:13.327 user 0m0.904s 00:06:13.327 sys 0m0.125s 00:06:13.327 07:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.327 ************************************ 00:06:13.327 END TEST accel_missing_filename 00:06:13.327 ************************************ 00:06:13.327 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 07:21:22 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.585 07:21:22 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:13.585 07:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.585 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 ************************************ 00:06:13.585 START TEST accel_compress_verify 00:06:13.585 ************************************ 00:06:13.585 07:21:22 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.585 07:21:22 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.585 07:21:22 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.585 07:21:22 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:13.585 07:21:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.585 07:21:22 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:13.585 07:21:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.585 07:21:22 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.585 07:21:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.585 07:21:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.585 07:21:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.585 07:21:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.585 07:21:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.585 07:21:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.585 07:21:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.585 07:21:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.585 07:21:22 -- accel/accel.sh@42 -- # jq -r . 00:06:13.585 [2024-11-19 07:21:22.627774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:13.585 [2024-11-19 07:21:22.628089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58482 ] 00:06:13.585 [2024-11-19 07:21:22.778105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.842 [2024-11-19 07:21:22.958718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.099 [2024-11-19 07:21:23.100008] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.356 [2024-11-19 07:21:23.433678] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:14.614 00:06:14.614 Compression does not support the verify option, aborting. 00:06:14.614 07:21:23 -- common/autotest_common.sh@653 -- # es=161 00:06:14.614 ************************************ 00:06:14.614 END TEST accel_compress_verify 00:06:14.614 ************************************ 00:06:14.614 07:21:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.614 07:21:23 -- common/autotest_common.sh@662 -- # es=33 00:06:14.614 07:21:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:14.614 07:21:23 -- common/autotest_common.sh@670 -- # es=1 00:06:14.614 07:21:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.614 00:06:14.614 real 0m1.108s 00:06:14.614 user 0m0.910s 00:06:14.614 sys 0m0.123s 00:06:14.614 07:21:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.614 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.614 07:21:23 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:14.614 07:21:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.614 07:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.614 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.614 ************************************ 00:06:14.614 START TEST accel_wrong_workload 00:06:14.614 ************************************ 00:06:14.614 07:21:23 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:14.614 07:21:23 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.614 07:21:23 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:14.614 07:21:23 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.614 07:21:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.614 07:21:23 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.614 07:21:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.614 07:21:23 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:14.614 07:21:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:14.614 07:21:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.614 07:21:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.614 07:21:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.614 07:21:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.614 07:21:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.614 07:21:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.614 07:21:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.614 07:21:23 -- accel/accel.sh@42 -- # jq -r . 
00:06:14.614 Unsupported workload type: foobar 00:06:14.614 [2024-11-19 07:21:23.776330] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:14.614 accel_perf options: 00:06:14.614 [-h help message] 00:06:14.614 [-q queue depth per core] 00:06:14.614 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.614 [-T number of threads per core 00:06:14.614 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.614 [-t time in seconds] 00:06:14.614 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.614 [ dif_verify, , dif_generate, dif_generate_copy 00:06:14.614 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.614 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.614 [-S for crc32c workload, use this seed value (default 0) 00:06:14.614 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.614 [-f for fill workload, use this BYTE value (default 255) 00:06:14.614 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.614 [-y verify result if this switch is on] 00:06:14.614 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.614 Can be used to spread operations across a wider range of memory. 00:06:14.614 07:21:23 -- common/autotest_common.sh@653 -- # es=1 00:06:14.614 07:21:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.614 07:21:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.614 07:21:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.614 00:06:14.614 real 0m0.056s 00:06:14.614 user 0m0.050s 00:06:14.614 sys 0m0.035s 00:06:14.614 07:21:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.614 ************************************ 00:06:14.614 END TEST accel_wrong_workload 00:06:14.614 ************************************ 00:06:14.614 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.614 07:21:23 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.614 07:21:23 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:14.614 07:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.614 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.614 ************************************ 00:06:14.614 START TEST accel_negative_buffers 00:06:14.614 ************************************ 00:06:14.614 07:21:23 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.614 07:21:23 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.614 07:21:23 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:14.614 07:21:23 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.614 07:21:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.614 07:21:23 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.614 07:21:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.614 07:21:23 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:14.614 07:21:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:14.614 07:21:23 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:14.614 07:21:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.614 07:21:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.614 07:21:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.614 07:21:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.614 07:21:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.614 07:21:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.614 07:21:23 -- accel/accel.sh@42 -- # jq -r . 00:06:14.872 -x option must be non-negative. 00:06:14.872 [2024-11-19 07:21:23.874055] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:14.872 accel_perf options: 00:06:14.872 [-h help message] 00:06:14.872 [-q queue depth per core] 00:06:14.872 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.872 [-T number of threads per core 00:06:14.872 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.872 [-t time in seconds] 00:06:14.872 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.872 [ dif_verify, , dif_generate, dif_generate_copy 00:06:14.872 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.872 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.872 [-S for crc32c workload, use this seed value (default 0) 00:06:14.872 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.872 [-f for fill workload, use this BYTE value (default 255) 00:06:14.872 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.872 [-y verify result if this switch is on] 00:06:14.872 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.872 Can be used to spread operations across a wider range of memory. 
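Note: this rejection follows directly from the usage text just printed: for the xor workload, -x names the number of source buffers and the minimum is 2, so -x -1 fails argument parsing before the app starts (hence the es=1 seen next). A valid spelling of the same workload, same binary path as above:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3 # three source buffers, verify on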
00:06:14.872 07:21:23 -- common/autotest_common.sh@653 -- # es=1 00:06:14.872 07:21:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.872 07:21:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.872 07:21:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.872 ************************************ 00:06:14.872 END TEST accel_negative_buffers 00:06:14.872 ************************************ 00:06:14.872 00:06:14.872 real 0m0.053s 00:06:14.872 user 0m0.053s 00:06:14.872 sys 0m0.031s 00:06:14.872 07:21:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.872 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.872 07:21:23 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:14.872 07:21:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:14.872 07:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.872 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.872 ************************************ 00:06:14.872 START TEST accel_crc32c 00:06:14.872 ************************************ 00:06:14.872 07:21:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:14.872 07:21:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.872 07:21:23 -- accel/accel.sh@17 -- # local accel_module 00:06:14.872 07:21:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:14.872 07:21:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:14.872 07:21:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.872 07:21:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.872 07:21:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.872 07:21:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.872 07:21:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.872 07:21:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.872 07:21:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.872 07:21:23 -- accel/accel.sh@42 -- # jq -r . 00:06:14.872 [2024-11-19 07:21:23.964959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.872 [2024-11-19 07:21:23.965078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58549 ] 00:06:14.872 [2024-11-19 07:21:24.107340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.130 [2024-11-19 07:21:24.287276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.031 07:21:25 -- accel/accel.sh@18 -- # out=' 00:06:17.031 SPDK Configuration: 00:06:17.031 Core mask: 0x1 00:06:17.031 00:06:17.031 Accel Perf Configuration: 00:06:17.032 Workload Type: crc32c 00:06:17.032 CRC-32C seed: 32 00:06:17.032 Transfer size: 4096 bytes 00:06:17.032 Vector count 1 00:06:17.032 Module: software 00:06:17.032 Queue depth: 32 00:06:17.032 Allocate depth: 32 00:06:17.032 # threads/core: 1 00:06:17.032 Run time: 1 seconds 00:06:17.032 Verify: Yes 00:06:17.032 00:06:17.032 Running for 1 seconds... 
00:06:17.032 00:06:17.032 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.032 ------------------------------------------------------------------------------------ 00:06:17.032 0,0 458880/s 1792 MiB/s 0 0 00:06:17.032 ==================================================================================== 00:06:17.032 Total 458880/s 1792 MiB/s 0 0' 00:06:17.032 07:21:25 -- accel/accel.sh@20 -- # IFS=: 00:06:17.032 07:21:25 -- accel/accel.sh@20 -- # read -r var val 00:06:17.032 07:21:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:17.032 07:21:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:17.032 07:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.032 07:21:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.032 07:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.032 07:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.032 07:21:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.032 07:21:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.032 07:21:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.032 07:21:25 -- accel/accel.sh@42 -- # jq -r . 00:06:17.032 [2024-11-19 07:21:25.990670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.032 [2024-11-19 07:21:25.990775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58575 ] 00:06:17.032 [2024-11-19 07:21:26.136857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.032 [2024-11-19 07:21:26.277326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val= 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val= 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=0x1 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val= 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val= 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=crc32c 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=32 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val= 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=software 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=32 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=32 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=1 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val=Yes 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val= 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:17.290 07:21:26 -- accel/accel.sh@21 -- # val= 00:06:17.290 07:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:17.290 07:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:18.665 07:21:27 -- accel/accel.sh@21 -- # val= 00:06:18.666 07:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:18.666 07:21:27 -- accel/accel.sh@21 -- # val= 00:06:18.666 07:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:18.666 07:21:27 -- accel/accel.sh@21 -- # val= 00:06:18.666 07:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:18.666 07:21:27 -- accel/accel.sh@21 -- # val= 00:06:18.666 07:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:18.666 07:21:27 -- accel/accel.sh@21 -- # val= 00:06:18.666 07:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:18.666 07:21:27 -- 
accel/accel.sh@20 -- # read -r var val 00:06:18.666 07:21:27 -- accel/accel.sh@21 -- # val= 00:06:18.666 07:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:18.666 07:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:18.666 07:21:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.666 07:21:27 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:18.666 07:21:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.666 00:06:18.666 real 0m3.915s 00:06:18.666 user 0m3.493s 00:06:18.666 sys 0m0.215s 00:06:18.666 07:21:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.666 07:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:18.666 ************************************ 00:06:18.666 END TEST accel_crc32c 00:06:18.666 ************************************ 00:06:18.666 07:21:27 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:18.666 07:21:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:18.666 07:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.666 07:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:18.666 ************************************ 00:06:18.666 START TEST accel_crc32c_C2 00:06:18.666 ************************************ 00:06:18.666 07:21:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:18.666 07:21:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.666 07:21:27 -- accel/accel.sh@17 -- # local accel_module 00:06:18.666 07:21:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:18.666 07:21:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:18.666 07:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.666 07:21:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.666 07:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.666 07:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.666 07:21:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.666 07:21:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.666 07:21:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.666 07:21:27 -- accel/accel.sh@42 -- # jq -r . 00:06:18.924 [2024-11-19 07:21:27.930351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.924 [2024-11-19 07:21:27.930455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:06:18.924 [2024-11-19 07:21:28.076686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.185 [2024-11-19 07:21:28.255370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.122 07:21:29 -- accel/accel.sh@18 -- # out=' 00:06:21.123 SPDK Configuration: 00:06:21.123 Core mask: 0x1 00:06:21.123 00:06:21.123 Accel Perf Configuration: 00:06:21.123 Workload Type: crc32c 00:06:21.123 CRC-32C seed: 0 00:06:21.123 Transfer size: 4096 bytes 00:06:21.123 Vector count 2 00:06:21.123 Module: software 00:06:21.123 Queue depth: 32 00:06:21.123 Allocate depth: 32 00:06:21.123 # threads/core: 1 00:06:21.123 Run time: 1 seconds 00:06:21.123 Verify: Yes 00:06:21.123 00:06:21.123 Running for 1 seconds... 
00:06:21.123 00:06:21.123 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.123 ------------------------------------------------------------------------------------ 00:06:21.123 0,0 385056/s 3008 MiB/s 0 0 00:06:21.123 ==================================================================================== 00:06:21.123 Total 385056/s 3008 MiB/s 0 0' 00:06:21.123 07:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:21.123 07:21:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:21.123 07:21:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.123 07:21:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.123 07:21:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.123 07:21:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.123 07:21:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.123 07:21:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.123 07:21:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.123 07:21:29 -- accel/accel.sh@42 -- # jq -r . 00:06:21.123 [2024-11-19 07:21:29.967966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.123 [2024-11-19 07:21:29.968077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ] 00:06:21.123 [2024-11-19 07:21:30.112973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.123 [2024-11-19 07:21:30.258625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val= 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val= 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=0x1 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val= 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val= 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=crc32c 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=0 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 --
accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val= 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=software 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=32 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=32 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=1 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.123 07:21:30 -- accel/accel.sh@21 -- # val=Yes 00:06:21.123 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.123 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.381 07:21:30 -- accel/accel.sh@21 -- # val= 00:06:21.382 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.382 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.382 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.382 07:21:30 -- accel/accel.sh@21 -- # val= 00:06:21.382 07:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.382 07:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:21.382 07:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:22.755 07:21:31 -- accel/accel.sh@21 -- # val= 00:06:22.755 07:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:22.755 07:21:31 -- accel/accel.sh@21 -- # val= 00:06:22.755 07:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:22.755 07:21:31 -- accel/accel.sh@21 -- # val= 00:06:22.755 07:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:22.755 07:21:31 -- accel/accel.sh@21 -- # val= 00:06:22.755 07:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:22.755 07:21:31 -- accel/accel.sh@21 -- # val= 00:06:22.755 07:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:22.755 07:21:31 -- 
accel/accel.sh@20 -- # read -r var val 00:06:22.755 07:21:31 -- accel/accel.sh@21 -- # val= 00:06:22.755 07:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # IFS=: 00:06:22.755 07:21:31 -- accel/accel.sh@20 -- # read -r var val 00:06:22.755 07:21:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.755 07:21:31 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:22.755 07:21:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.755 00:06:22.755 real 0m3.949s 00:06:22.755 user 0m3.510s 00:06:22.755 sys 0m0.231s 00:06:22.755 07:21:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.755 07:21:31 -- common/autotest_common.sh@10 -- # set +x 00:06:22.755 ************************************ 00:06:22.755 END TEST accel_crc32c_C2 00:06:22.755 ************************************ 00:06:22.755 07:21:31 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:22.755 07:21:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:22.755 07:21:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.755 07:21:31 -- common/autotest_common.sh@10 -- # set +x 00:06:22.755 ************************************ 00:06:22.755 START TEST accel_copy 00:06:22.755 ************************************ 00:06:22.755 07:21:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:22.755 07:21:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.755 07:21:31 -- accel/accel.sh@17 -- # local accel_module 00:06:22.755 07:21:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:22.755 07:21:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:22.755 07:21:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.755 07:21:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.755 07:21:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.755 07:21:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.755 07:21:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.755 07:21:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.755 07:21:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.755 07:21:31 -- accel/accel.sh@42 -- # jq -r . 00:06:22.756 [2024-11-19 07:21:31.913483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.756 [2024-11-19 07:21:31.913586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58683 ] 00:06:23.013 [2024-11-19 07:21:32.059634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.013 [2024-11-19 07:21:32.197606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.915 07:21:33 -- accel/accel.sh@18 -- # out=' 00:06:24.915 SPDK Configuration: 00:06:24.915 Core mask: 0x1 00:06:24.915 00:06:24.915 Accel Perf Configuration: 00:06:24.915 Workload Type: copy 00:06:24.915 Transfer size: 4096 bytes 00:06:24.915 Vector count 1 00:06:24.915 Module: software 00:06:24.915 Queue depth: 32 00:06:24.915 Allocate depth: 32 00:06:24.915 # threads/core: 1 00:06:24.915 Run time: 1 seconds 00:06:24.915 Verify: Yes 00:06:24.915 00:06:24.915 Running for 1 seconds... 
00:06:24.915 00:06:24.915 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.915 ------------------------------------------------------------------------------------ 00:06:24.915 0,0 375552/s 1467 MiB/s 0 0 00:06:24.915 ==================================================================================== 00:06:24.915 Total 375552/s 1467 MiB/s 0 0' 00:06:24.915 07:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:24.915 07:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:24.915 07:21:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:24.915 07:21:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:24.915 07:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.915 07:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.915 07:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.915 07:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.915 07:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.915 07:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.915 07:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.915 07:21:33 -- accel/accel.sh@42 -- # jq -r . 00:06:24.915 [2024-11-19 07:21:33.804810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.915 [2024-11-19 07:21:33.804914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58709 ] 00:06:24.915 [2024-11-19 07:21:33.951721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.915 [2024-11-19 07:21:34.090171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.173 07:21:34 -- accel/accel.sh@21 -- # val= 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.173 07:21:34 -- accel/accel.sh@21 -- # val= 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.173 07:21:34 -- accel/accel.sh@21 -- # val=0x1 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.173 07:21:34 -- accel/accel.sh@21 -- # val= 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.173 07:21:34 -- accel/accel.sh@21 -- # val= 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.173 07:21:34 -- accel/accel.sh@21 -- # val=copy 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.173 07:21:34 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.173 07:21:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.173 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.173 07:21:34 -- 
accel/accel.sh@21 -- # val= 00:06:25.173 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val=software 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val=32 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val=32 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val=1 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val=Yes 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val= 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:25.174 07:21:34 -- accel/accel.sh@21 -- # val= 00:06:25.174 07:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:25.174 07:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.549 07:21:35 -- accel/accel.sh@21 -- # val= 00:06:26.549 07:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:26.549 07:21:35 -- accel/accel.sh@21 -- # val= 00:06:26.549 07:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:26.549 07:21:35 -- accel/accel.sh@21 -- # val= 00:06:26.549 07:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:26.549 07:21:35 -- accel/accel.sh@21 -- # val= 00:06:26.549 07:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:26.549 07:21:35 -- accel/accel.sh@21 -- # val= 00:06:26.549 07:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:26.549 07:21:35 -- accel/accel.sh@21 -- # val= 00:06:26.549 07:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.549 07:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:26.549 07:21:35 -- 
accel/accel.sh@20 -- # read -r var val 00:06:26.549 07:21:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.549 07:21:35 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:26.549 07:21:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.549 00:06:26.549 real 0m3.786s 00:06:26.549 user 0m3.366s 00:06:26.549 sys 0m0.219s 00:06:26.549 07:21:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.549 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.549 ************************************ 00:06:26.549 END TEST accel_copy 00:06:26.549 ************************************ 00:06:26.549 07:21:35 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.549 07:21:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:26.549 07:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.549 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.549 ************************************ 00:06:26.549 START TEST accel_fill 00:06:26.549 ************************************ 00:06:26.549 07:21:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.549 07:21:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.549 07:21:35 -- accel/accel.sh@17 -- # local accel_module 00:06:26.550 07:21:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.550 07:21:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.550 07:21:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.550 07:21:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.550 07:21:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.550 07:21:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.550 07:21:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.550 07:21:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.550 07:21:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.550 07:21:35 -- accel/accel.sh@42 -- # jq -r . 00:06:26.550 [2024-11-19 07:21:35.740860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.550 [2024-11-19 07:21:35.740965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58745 ] 00:06:26.808 [2024-11-19 07:21:35.885943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.808 [2024-11-19 07:21:36.025271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.707 07:21:37 -- accel/accel.sh@18 -- # out=' 00:06:28.707 SPDK Configuration: 00:06:28.707 Core mask: 0x1 00:06:28.707 00:06:28.707 Accel Perf Configuration: 00:06:28.707 Workload Type: fill 00:06:28.707 Fill pattern: 0x80 00:06:28.707 Transfer size: 4096 bytes 00:06:28.707 Vector count 1 00:06:28.707 Module: software 00:06:28.707 Queue depth: 64 00:06:28.707 Allocate depth: 64 00:06:28.707 # threads/core: 1 00:06:28.708 Run time: 1 seconds 00:06:28.708 Verify: Yes 00:06:28.708 00:06:28.708 Running for 1 seconds... 
00:06:28.708 00:06:28.708 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.708 ------------------------------------------------------------------------------------ 00:06:28.708 0,0 595840/s 2327 MiB/s 0 0 00:06:28.708 ==================================================================================== 00:06:28.708 Total 595840/s 2327 MiB/s 0 0' 00:06:28.708 07:21:37 -- accel/accel.sh@20 -- # IFS=: 00:06:28.708 07:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:28.708 07:21:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.708 07:21:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.708 07:21:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.708 07:21:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.708 07:21:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.708 07:21:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.708 07:21:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.708 07:21:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.708 07:21:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.708 07:21:37 -- accel/accel.sh@42 -- # jq -r . 00:06:28.708 [2024-11-19 07:21:37.628488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.708 [2024-11-19 07:21:37.628593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58771 ] 00:06:28.708 [2024-11-19 07:21:37.775687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.708 [2024-11-19 07:21:37.915758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val= 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val= 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=0x1 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val= 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val= 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=fill 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=0x80 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 
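The wall of case "$var" in / IFS=: / read -r var val records around this point is accel.sh consuming the "Key: value" configuration dump that accel_perf prints back: each line is split at the colon, and the fields the test cares about are latched (they surface in this trace as accel_opc=fill and accel_module=software). A minimal, self-contained sketch of that idiom; the variable names accel_opc and accel_module are taken from the trace, while sample_cfg and the loop body are simplified stand-ins, not the exact accel.sh source:

sample_cfg='Workload Type: fill
Module: software'
while IFS=: read -r var val; do
    case "$var" in
        *'Workload Type'*) accel_opc=${val# } ;;    # latched as accel_opc=fill
        *Module*) accel_module=${val# } ;;          # latched as accel_module=software
    esac
done <<< "$sample_cfg"
echo "opc=$accel_opc module=$accel_module"          # -> opc=fill module=software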
00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val= 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=software 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=64 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=64 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=1 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val=Yes 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val= 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.966 07:21:38 -- accel/accel.sh@21 -- # val= 00:06:28.966 07:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.966 07:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:30.340 07:21:39 -- accel/accel.sh@21 -- # val= 00:06:30.340 07:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:30.340 07:21:39 -- accel/accel.sh@21 -- # val= 00:06:30.340 07:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:30.340 07:21:39 -- accel/accel.sh@21 -- # val= 00:06:30.340 07:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:30.340 07:21:39 -- accel/accel.sh@21 -- # val= 00:06:30.340 07:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:30.340 07:21:39 -- accel/accel.sh@21 -- # val= 00:06:30.340 07:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # IFS=: 
00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:30.340 07:21:39 -- accel/accel.sh@21 -- # val= 00:06:30.340 07:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:30.340 07:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:30.340 07:21:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.340 07:21:39 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:30.340 07:21:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.340 00:06:30.340 real 0m3.776s 00:06:30.340 user 0m3.366s 00:06:30.340 sys 0m0.211s 00:06:30.340 07:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.340 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:06:30.340 ************************************ 00:06:30.340 END TEST accel_fill 00:06:30.340 ************************************ 00:06:30.340 07:21:39 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:30.340 07:21:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:30.340 07:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.340 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:06:30.340 ************************************ 00:06:30.340 START TEST accel_copy_crc32c 00:06:30.340 ************************************ 00:06:30.340 07:21:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:30.340 07:21:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.340 07:21:39 -- accel/accel.sh@17 -- # local accel_module 00:06:30.340 07:21:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:30.340 07:21:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:30.340 07:21:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.340 07:21:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.340 07:21:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.340 07:21:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.340 07:21:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.340 07:21:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.340 07:21:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.340 07:21:39 -- accel/accel.sh@42 -- # jq -r . 00:06:30.340 [2024-11-19 07:21:39.554022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.340 [2024-11-19 07:21:39.554126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58812 ] 00:06:30.599 [2024-11-19 07:21:39.701943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.857 [2024-11-19 07:21:39.873385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.758 07:21:41 -- accel/accel.sh@18 -- # out=' 00:06:32.758 SPDK Configuration: 00:06:32.758 Core mask: 0x1 00:06:32.758 00:06:32.758 Accel Perf Configuration: 00:06:32.758 Workload Type: copy_crc32c 00:06:32.758 CRC-32C seed: 0 00:06:32.758 Vector size: 4096 bytes 00:06:32.758 Transfer size: 4096 bytes 00:06:32.758 Vector count 1 00:06:32.758 Module: software 00:06:32.758 Queue depth: 32 00:06:32.758 Allocate depth: 32 00:06:32.758 # threads/core: 1 00:06:32.758 Run time: 1 seconds 00:06:32.758 Verify: Yes 00:06:32.758 00:06:32.758 Running for 1 seconds... 
00:06:32.758 00:06:32.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.758 ------------------------------------------------------------------------------------ 00:06:32.758 0,0 238464/s 931 MiB/s 0 0 00:06:32.758 ==================================================================================== 00:06:32.758 Total 238464/s 931 MiB/s 0 0' 00:06:32.758 07:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:32.758 07:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:32.758 07:21:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:32.758 07:21:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:32.758 07:21:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.758 07:21:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.759 07:21:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.759 07:21:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.759 07:21:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.759 07:21:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.759 07:21:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.759 07:21:41 -- accel/accel.sh@42 -- # jq -r . 00:06:32.759 [2024-11-19 07:21:41.623333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.759 [2024-11-19 07:21:41.623437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58838 ] 00:06:32.759 [2024-11-19 07:21:41.770873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.759 [2024-11-19 07:21:41.909639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val= 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val= 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=0x1 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val= 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val= 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=0 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 
07:21:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val= 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=software 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=32 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=32 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=1 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val=Yes 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val= 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:33.016 07:21:42 -- accel/accel.sh@21 -- # val= 00:06:33.016 07:21:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # IFS=: 00:06:33.016 07:21:42 -- accel/accel.sh@20 -- # read -r var val 00:06:34.389 07:21:43 -- accel/accel.sh@21 -- # val= 00:06:34.389 07:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:34.389 07:21:43 -- accel/accel.sh@21 -- # val= 00:06:34.389 07:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:34.389 07:21:43 -- accel/accel.sh@21 -- # val= 00:06:34.389 07:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:34.389 07:21:43 -- accel/accel.sh@21 -- # val= 00:06:34.389 07:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # IFS=: 
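Each accel_perf flag in the traced command lines maps one-to-one onto a field of the configuration dump the run prints back; note in particular that the decimal -f 128 resurfaces as "Fill pattern: 0x80". As a sketch, the fill case from earlier in this log could be re-run standalone, dropping only the -c /dev/fd/62 config pipe that the wrapper supplies:

# -t 1   -> Run time: 1 seconds      -q 64 -> Queue depth: 64
# -w     -> Workload Type: fill      -a 64 -> Allocate depth: 64
# -f 128 -> Fill pattern: 0x80       -y    -> Verify: Yes
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y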
00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:34.389 07:21:43 -- accel/accel.sh@21 -- # val= 00:06:34.389 07:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:34.389 07:21:43 -- accel/accel.sh@21 -- # val= 00:06:34.389 07:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # IFS=: 00:06:34.389 07:21:43 -- accel/accel.sh@20 -- # read -r var val 00:06:34.389 07:21:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.389 07:21:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:34.389 07:21:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.389 00:06:34.389 real 0m3.973s 00:06:34.389 user 0m3.552s 00:06:34.389 sys 0m0.218s 00:06:34.389 07:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.389 ************************************ 00:06:34.389 07:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.389 END TEST accel_copy_crc32c 00:06:34.389 ************************************ 00:06:34.389 07:21:43 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.389 07:21:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:34.389 07:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.389 07:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.389 ************************************ 00:06:34.389 START TEST accel_copy_crc32c_C2 00:06:34.389 ************************************ 00:06:34.389 07:21:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.389 07:21:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.389 07:21:43 -- accel/accel.sh@17 -- # local accel_module 00:06:34.389 07:21:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:34.389 07:21:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:34.389 07:21:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.389 07:21:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.389 07:21:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.389 07:21:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.389 07:21:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.389 07:21:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.389 07:21:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.389 07:21:43 -- accel/accel.sh@42 -- # jq -r . 00:06:34.389 [2024-11-19 07:21:43.562154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:34.389 [2024-11-19 07:21:43.562242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:06:34.647 [2024-11-19 07:21:43.697125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.647 [2024-11-19 07:21:43.837815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.553 07:21:45 -- accel/accel.sh@18 -- # out=' 00:06:36.553 SPDK Configuration: 00:06:36.553 Core mask: 0x1 00:06:36.553 00:06:36.553 Accel Perf Configuration: 00:06:36.553 Workload Type: copy_crc32c 00:06:36.553 CRC-32C seed: 0 00:06:36.553 Vector size: 4096 bytes 00:06:36.553 Transfer size: 8192 bytes 00:06:36.553 Vector count 2 00:06:36.553 Module: software 00:06:36.553 Queue depth: 32 00:06:36.553 Allocate depth: 32 00:06:36.553 # threads/core: 1 00:06:36.553 Run time: 1 seconds 00:06:36.553 Verify: Yes 00:06:36.553 00:06:36.554 Running for 1 seconds... 00:06:36.554 00:06:36.554 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.554 ------------------------------------------------------------------------------------ 00:06:36.554 0,0 230784/s 1803 MiB/s 0 0 00:06:36.554 ==================================================================================== 00:06:36.554 Total 230784/s 1803 MiB/s 0 0' 00:06:36.554 07:21:45 -- accel/accel.sh@20 -- # IFS=: 00:06:36.554 07:21:45 -- accel/accel.sh@20 -- # read -r var val 00:06:36.554 07:21:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:36.554 07:21:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:36.554 07:21:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.554 07:21:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.554 07:21:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.554 07:21:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.554 07:21:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.554 07:21:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.554 07:21:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.554 07:21:45 -- accel/accel.sh@42 -- # jq -r . 00:06:36.554 [2024-11-19 07:21:45.465317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
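The Bandwidth column in these tables is plain arithmetic over the Transfers column: transfers per second times the transfer size from the configuration dump, scaled by 1048576 to MiB/s. Shell arithmetic reproduces the per-core figures printed above exactly:

echo $(( 595840 * 4096 / 1048576 ))   # fill,        4096 B/transfer -> 2327 MiB/s
echo $(( 238464 * 4096 / 1048576 ))   # copy_crc32c, 4096 B/transfer -> 931 MiB/s
echo $(( 230784 * 8192 / 1048576 ))   # C2 run,      8192 B/transfer -> 1803 MiB/s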
00:06:36.554 [2024-11-19 07:21:45.465424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58901 ] 00:06:36.554 [2024-11-19 07:21:45.613210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.814 [2024-11-19 07:21:45.861112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val= 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val= 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=0x1 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val= 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val= 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=0 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val= 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=software 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=32 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=32 
00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=1 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val=Yes 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val= 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:36.814 07:21:46 -- accel/accel.sh@21 -- # val= 00:06:36.814 07:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # IFS=: 00:06:36.814 07:21:46 -- accel/accel.sh@20 -- # read -r var val 00:06:38.728 07:21:47 -- accel/accel.sh@21 -- # val= 00:06:38.728 07:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:38.728 07:21:47 -- accel/accel.sh@21 -- # val= 00:06:38.728 07:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:38.728 07:21:47 -- accel/accel.sh@21 -- # val= 00:06:38.728 07:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:38.728 07:21:47 -- accel/accel.sh@21 -- # val= 00:06:38.728 07:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:38.728 07:21:47 -- accel/accel.sh@21 -- # val= 00:06:38.728 07:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:38.728 07:21:47 -- accel/accel.sh@21 -- # val= 00:06:38.728 07:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # IFS=: 00:06:38.728 07:21:47 -- accel/accel.sh@20 -- # read -r var val 00:06:38.728 07:21:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.728 07:21:47 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:38.728 07:21:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.728 00:06:38.728 real 0m3.969s 00:06:38.728 user 0m3.504s 00:06:38.728 sys 0m0.252s 00:06:38.728 07:21:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.728 07:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:38.728 ************************************ 00:06:38.728 END TEST accel_copy_crc32c_C2 00:06:38.728 ************************************ 00:06:38.728 07:21:47 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:38.728 07:21:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
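Every case in this section is driven through the run_test helper (tagged common/autotest_common.sh in the trace), which is what produces the starred START TEST / END TEST banners and the real/user/sys triples. A hedged simplification of that wrapper, not the actual helper source; the real one also handles xtrace toggling and exit-status bookkeeping:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"    # runs accel_test ...; bash's time keyword emits the real/user/sys lines
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}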
00:06:38.728 07:21:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.728 07:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:38.728 ************************************ 00:06:38.728 START TEST accel_dualcast 00:06:38.728 ************************************ 00:06:38.728 07:21:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:38.728 07:21:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.728 07:21:47 -- accel/accel.sh@17 -- # local accel_module 00:06:38.728 07:21:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:38.728 07:21:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.728 07:21:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.728 07:21:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.728 07:21:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.728 07:21:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.728 07:21:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.728 07:21:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.728 07:21:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.728 07:21:47 -- accel/accel.sh@42 -- # jq -r . 00:06:38.728 [2024-11-19 07:21:47.570531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.728 [2024-11-19 07:21:47.570635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ] 00:06:38.728 [2024-11-19 07:21:47.717248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.728 [2024-11-19 07:21:47.870952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.723 07:21:49 -- accel/accel.sh@18 -- # out=' 00:06:40.723 SPDK Configuration: 00:06:40.723 Core mask: 0x1 00:06:40.723 00:06:40.723 Accel Perf Configuration: 00:06:40.723 Workload Type: dualcast 00:06:40.723 Transfer size: 4096 bytes 00:06:40.723 Vector count 1 00:06:40.723 Module: software 00:06:40.723 Queue depth: 32 00:06:40.723 Allocate depth: 32 00:06:40.723 # threads/core: 1 00:06:40.723 Run time: 1 seconds 00:06:40.723 Verify: Yes 00:06:40.723 00:06:40.723 Running for 1 seconds... 00:06:40.723 00:06:40.723 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.723 ------------------------------------------------------------------------------------ 00:06:40.723 0,0 417696/s 1631 MiB/s 0 0 00:06:40.723 ==================================================================================== 00:06:40.723 Total 417696/s 1631 MiB/s 0 0' 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.723 07:21:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:40.723 07:21:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:40.723 07:21:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.723 07:21:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.723 07:21:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.723 07:21:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.723 07:21:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.723 07:21:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.723 07:21:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.723 07:21:49 -- accel/accel.sh@42 -- # jq -r . 
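The build_accel_config records that precede each run (accel_json_cfg=(), the [[ 0 -gt 0 ]] guards, local IFS=,, jq -r .) assemble an optional JSON accel configuration: fragments land in the array only when a guard fires (all of them evaluate to the false [[ 0 -gt 0 ]] here), the array is joined with commas, and jq validates the result before accel_perf reads it over -c /dev/fd/62. A sketch using the accel_json_cfg name from the trace; join_cfg and the bracket envelope are stand-ins:

accel_json_cfg=()   # stays empty in these runs: every guard above evaluated to [[ 0 -gt 0 ]]
join_cfg() { local IFS=,; printf '[%s]' "${accel_json_cfg[*]}"; }
join_cfg | jq -r .  # jq validates/pretty-prints; accel_perf reads the stream via -c /dev/fd/62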
00:06:40.723 [2024-11-19 07:21:49.503724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.723 [2024-11-19 07:21:49.503832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58968 ] 00:06:40.723 [2024-11-19 07:21:49.650600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.723 [2024-11-19 07:21:49.803906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.723 07:21:49 -- accel/accel.sh@21 -- # val= 00:06:40.723 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.723 07:21:49 -- accel/accel.sh@21 -- # val= 00:06:40.723 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.723 07:21:49 -- accel/accel.sh@21 -- # val=0x1 00:06:40.723 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.723 07:21:49 -- accel/accel.sh@21 -- # val= 00:06:40.723 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.723 07:21:49 -- accel/accel.sh@21 -- # val= 00:06:40.723 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.723 07:21:49 -- accel/accel.sh@21 -- # val=dualcast 00:06:40.723 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.723 07:21:49 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.723 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val= 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val=software 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val=32 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val=32 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val=1 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 
07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val=Yes 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val= 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:40.724 07:21:49 -- accel/accel.sh@21 -- # val= 00:06:40.724 07:21:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # IFS=: 00:06:40.724 07:21:49 -- accel/accel.sh@20 -- # read -r var val 00:06:42.636 07:21:51 -- accel/accel.sh@21 -- # val= 00:06:42.636 07:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # IFS=: 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # read -r var val 00:06:42.636 07:21:51 -- accel/accel.sh@21 -- # val= 00:06:42.636 07:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # IFS=: 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # read -r var val 00:06:42.636 07:21:51 -- accel/accel.sh@21 -- # val= 00:06:42.636 07:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # IFS=: 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # read -r var val 00:06:42.636 07:21:51 -- accel/accel.sh@21 -- # val= 00:06:42.636 07:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # IFS=: 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # read -r var val 00:06:42.636 07:21:51 -- accel/accel.sh@21 -- # val= 00:06:42.636 07:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # IFS=: 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # read -r var val 00:06:42.636 07:21:51 -- accel/accel.sh@21 -- # val= 00:06:42.636 07:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # IFS=: 00:06:42.636 07:21:51 -- accel/accel.sh@20 -- # read -r var val 00:06:42.636 07:21:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.636 07:21:51 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:42.636 07:21:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.636 ************************************ 00:06:42.636 END TEST accel_dualcast 00:06:42.636 ************************************ 00:06:42.636 00:06:42.636 real 0m3.865s 00:06:42.636 user 0m3.424s 00:06:42.636 sys 0m0.235s 00:06:42.636 07:21:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.636 07:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.636 07:21:51 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:42.636 07:21:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.636 07:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.636 07:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.636 ************************************ 00:06:42.636 START TEST accel_compare 00:06:42.636 ************************************ 00:06:42.636 07:21:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:42.636 
07:21:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.636 07:21:51 -- accel/accel.sh@17 -- # local accel_module 00:06:42.636 07:21:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:42.636 07:21:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:42.636 07:21:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.636 07:21:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.636 07:21:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.636 07:21:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.636 07:21:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.636 07:21:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.636 07:21:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.636 07:21:51 -- accel/accel.sh@42 -- # jq -r . 00:06:42.636 [2024-11-19 07:21:51.489161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.636 [2024-11-19 07:21:51.489278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59009 ] 00:06:42.636 [2024-11-19 07:21:51.635578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.636 [2024-11-19 07:21:51.803151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.541 07:21:53 -- accel/accel.sh@18 -- # out=' 00:06:44.541 SPDK Configuration: 00:06:44.541 Core mask: 0x1 00:06:44.541 00:06:44.541 Accel Perf Configuration: 00:06:44.541 Workload Type: compare 00:06:44.541 Transfer size: 4096 bytes 00:06:44.541 Vector count 1 00:06:44.541 Module: software 00:06:44.541 Queue depth: 32 00:06:44.541 Allocate depth: 32 00:06:44.541 # threads/core: 1 00:06:44.541 Run time: 1 seconds 00:06:44.541 Verify: Yes 00:06:44.541 00:06:44.541 Running for 1 seconds... 00:06:44.541 00:06:44.541 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.541 ------------------------------------------------------------------------------------ 00:06:44.541 0,0 536608/s 2096 MiB/s 0 0 00:06:44.541 ==================================================================================== 00:06:44.541 Total 536608/s 2096 MiB/s 0 0' 00:06:44.541 07:21:53 -- accel/accel.sh@20 -- # IFS=: 00:06:44.541 07:21:53 -- accel/accel.sh@20 -- # read -r var val 00:06:44.541 07:21:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:44.541 07:21:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.541 07:21:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:44.541 07:21:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.541 07:21:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.541 07:21:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.541 07:21:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.541 07:21:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.541 07:21:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.541 07:21:53 -- accel/accel.sh@42 -- # jq -r . 00:06:44.541 [2024-11-19 07:21:53.434608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.541 [2024-11-19 07:21:53.434715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:06:44.541 [2024-11-19 07:21:53.584991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.802 [2024-11-19 07:21:53.841010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.802 07:21:54 -- accel/accel.sh@21 -- # val= 00:06:44.802 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.802 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.802 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.802 07:21:54 -- accel/accel.sh@21 -- # val= 00:06:44.802 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.802 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.802 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.802 07:21:54 -- accel/accel.sh@21 -- # val=0x1 00:06:44.802 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.802 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.802 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val= 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val= 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val=compare 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val= 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val=software 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val=32 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val=32 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val=1 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val=Yes 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val= 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.803 07:21:54 -- accel/accel.sh@21 -- # val= 00:06:44.803 07:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # IFS=: 00:06:44.803 07:21:54 -- accel/accel.sh@20 -- # read -r var val 00:06:46.718 07:21:55 -- accel/accel.sh@21 -- # val= 00:06:46.718 07:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # IFS=: 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # read -r var val 00:06:46.718 07:21:55 -- accel/accel.sh@21 -- # val= 00:06:46.718 07:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # IFS=: 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # read -r var val 00:06:46.718 07:21:55 -- accel/accel.sh@21 -- # val= 00:06:46.718 07:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # IFS=: 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # read -r var val 00:06:46.718 07:21:55 -- accel/accel.sh@21 -- # val= 00:06:46.718 07:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # IFS=: 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # read -r var val 00:06:46.718 07:21:55 -- accel/accel.sh@21 -- # val= 00:06:46.718 07:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # IFS=: 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # read -r var val 00:06:46.718 07:21:55 -- accel/accel.sh@21 -- # val= 00:06:46.718 07:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # IFS=: 00:06:46.718 07:21:55 -- accel/accel.sh@20 -- # read -r var val 00:06:46.718 07:21:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.718 07:21:55 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:46.718 07:21:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.718 00:06:46.718 real 0m4.234s 00:06:46.718 user 0m3.734s 00:06:46.718 sys 0m0.287s 00:06:46.718 07:21:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.718 ************************************ 00:06:46.718 END TEST accel_compare 00:06:46.718 ************************************ 00:06:46.718 07:21:55 -- common/autotest_common.sh@10 -- # set +x 00:06:46.718 07:21:55 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:46.718 07:21:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:46.718 07:21:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.718 07:21:55 -- common/autotest_common.sh@10 -- # set +x 00:06:46.718 ************************************ 00:06:46.718 START TEST accel_xor 00:06:46.718 ************************************ 00:06:46.718 07:21:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:46.718 07:21:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.718 07:21:55 -- accel/accel.sh@17 -- # local accel_module 00:06:46.718 
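Semantically, the two workloads just verified are simple: dualcast reads one source buffer and writes it to two destinations, and compare checks two equal-sized buffers byte for byte, with the Miscompares column counting mismatches. A shell-level analogue, with dst1 and dst2 as scratch file names:

printf 'payload' | tee dst1 > dst2          # dualcast: one read, two writes
cmp -s dst1 dst2 && echo 'compare: match'   # compare: equality check -> Failed=0, Miscompares=0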
07:21:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:46.718 07:21:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:46.718 07:21:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.718 07:21:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.718 07:21:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.718 07:21:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.718 07:21:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.718 07:21:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.718 07:21:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.718 07:21:55 -- accel/accel.sh@42 -- # jq -r . 00:06:46.718 [2024-11-19 07:21:55.786880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.718 [2024-11-19 07:21:55.787022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59083 ] 00:06:46.718 [2024-11-19 07:21:55.939525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.980 [2024-11-19 07:21:56.200304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.897 07:21:58 -- accel/accel.sh@18 -- # out=' 00:06:48.897 SPDK Configuration: 00:06:48.897 Core mask: 0x1 00:06:48.897 00:06:48.897 Accel Perf Configuration: 00:06:48.897 Workload Type: xor 00:06:48.897 Source buffers: 2 00:06:48.897 Transfer size: 4096 bytes 00:06:48.897 Vector count 1 00:06:48.897 Module: software 00:06:48.897 Queue depth: 32 00:06:48.897 Allocate depth: 32 00:06:48.897 # threads/core: 1 00:06:48.897 Run time: 1 seconds 00:06:48.897 Verify: Yes 00:06:48.898 00:06:48.898 Running for 1 seconds... 00:06:48.898 00:06:48.898 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.898 ------------------------------------------------------------------------------------ 00:06:48.898 0,0 333856/s 1304 MiB/s 0 0 00:06:48.898 ==================================================================================== 00:06:48.898 Total 333856/s 1304 MiB/s 0 0' 00:06:48.898 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:48.898 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:48.898 07:21:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:48.898 07:21:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:48.898 07:21:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.898 07:21:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.898 07:21:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.898 07:21:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.898 07:21:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.898 07:21:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.898 07:21:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.898 07:21:58 -- accel/accel.sh@42 -- # jq -r . 00:06:48.898 [2024-11-19 07:21:58.094482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:48.898 [2024-11-19 07:21:58.094617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59113 ] 00:06:49.159 [2024-11-19 07:21:58.246907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.421 [2024-11-19 07:21:58.474342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val= 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val= 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=0x1 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val= 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val= 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=xor 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=2 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val= 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=software 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=32 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=32 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=1 00:06:49.421 07:21:58 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val=Yes 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val= 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.421 07:21:58 -- accel/accel.sh@21 -- # val= 00:06:49.421 07:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # IFS=: 00:06:49.421 07:21:58 -- accel/accel.sh@20 -- # read -r var val 00:06:51.331 07:22:00 -- accel/accel.sh@21 -- # val= 00:06:51.331 07:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # IFS=: 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # read -r var val 00:06:51.331 07:22:00 -- accel/accel.sh@21 -- # val= 00:06:51.331 07:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # IFS=: 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # read -r var val 00:06:51.331 07:22:00 -- accel/accel.sh@21 -- # val= 00:06:51.331 07:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # IFS=: 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # read -r var val 00:06:51.331 07:22:00 -- accel/accel.sh@21 -- # val= 00:06:51.331 07:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # IFS=: 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # read -r var val 00:06:51.331 07:22:00 -- accel/accel.sh@21 -- # val= 00:06:51.331 07:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # IFS=: 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # read -r var val 00:06:51.331 07:22:00 -- accel/accel.sh@21 -- # val= 00:06:51.331 07:22:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # IFS=: 00:06:51.331 07:22:00 -- accel/accel.sh@20 -- # read -r var val 00:06:51.331 07:22:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.331 07:22:00 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:51.331 07:22:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.331 ************************************ 00:06:51.331 END TEST accel_xor 00:06:51.331 ************************************ 00:06:51.331 00:06:51.331 real 0m4.367s 00:06:51.331 user 0m3.810s 00:06:51.331 sys 0m0.335s 00:06:51.331 07:22:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.331 07:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.331 07:22:00 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:51.331 07:22:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.331 07:22:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.331 07:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.331 ************************************ 00:06:51.331 START TEST accel_xor 00:06:51.331 ************************************ 00:06:51.331 
07:22:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:51.331 07:22:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.331 07:22:00 -- accel/accel.sh@17 -- # local accel_module 00:06:51.331 07:22:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:51.331 07:22:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:51.331 07:22:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.331 07:22:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.331 07:22:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.331 07:22:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.331 07:22:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.331 07:22:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.331 07:22:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.331 07:22:00 -- accel/accel.sh@42 -- # jq -r . 00:06:51.331 [2024-11-19 07:22:00.211479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.331 [2024-11-19 07:22:00.211584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:06:51.331 [2024-11-19 07:22:00.359609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.331 [2024-11-19 07:22:00.511836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.241 07:22:02 -- accel/accel.sh@18 -- # out=' 00:06:53.241 SPDK Configuration: 00:06:53.241 Core mask: 0x1 00:06:53.241 00:06:53.241 Accel Perf Configuration: 00:06:53.241 Workload Type: xor 00:06:53.241 Source buffers: 3 00:06:53.241 Transfer size: 4096 bytes 00:06:53.241 Vector count 1 00:06:53.241 Module: software 00:06:53.241 Queue depth: 32 00:06:53.241 Allocate depth: 32 00:06:53.241 # threads/core: 1 00:06:53.241 Run time: 1 seconds 00:06:53.241 Verify: Yes 00:06:53.241 00:06:53.241 Running for 1 seconds... 00:06:53.241 00:06:53.241 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.241 ------------------------------------------------------------------------------------ 00:06:53.241 0,0 423872/s 1655 MiB/s 0 0 00:06:53.241 ==================================================================================== 00:06:53.241 Total 423872/s 1655 MiB/s 0 0' 00:06:53.241 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.241 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.241 07:22:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:53.241 07:22:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:53.241 07:22:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.241 07:22:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.241 07:22:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.241 07:22:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.241 07:22:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.241 07:22:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.241 07:22:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.241 07:22:02 -- accel/accel.sh@42 -- # jq -r . 00:06:53.241 [2024-11-19 07:22:02.128717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:53.241 [2024-11-19 07:22:02.128817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59180 ] 00:06:53.241 [2024-11-19 07:22:02.270951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.241 [2024-11-19 07:22:02.421394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.499 07:22:02 -- accel/accel.sh@21 -- # val= 00:06:53.499 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.499 07:22:02 -- accel/accel.sh@21 -- # val= 00:06:53.499 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.499 07:22:02 -- accel/accel.sh@21 -- # val=0x1 00:06:53.499 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.499 07:22:02 -- accel/accel.sh@21 -- # val= 00:06:53.499 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.499 07:22:02 -- accel/accel.sh@21 -- # val= 00:06:53.499 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.499 07:22:02 -- accel/accel.sh@21 -- # val=xor 00:06:53.499 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.499 07:22:02 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:53.499 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val=3 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val= 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val=software 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val=32 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val=32 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val=1 00:06:53.500 07:22:02 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val=Yes 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val= 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:53.500 07:22:02 -- accel/accel.sh@21 -- # val= 00:06:53.500 07:22:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # IFS=: 00:06:53.500 07:22:02 -- accel/accel.sh@20 -- # read -r var val 00:06:54.929 07:22:03 -- accel/accel.sh@21 -- # val= 00:06:54.929 07:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # IFS=: 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # read -r var val 00:06:54.929 07:22:03 -- accel/accel.sh@21 -- # val= 00:06:54.929 07:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # IFS=: 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # read -r var val 00:06:54.929 07:22:03 -- accel/accel.sh@21 -- # val= 00:06:54.929 07:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # IFS=: 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # read -r var val 00:06:54.929 07:22:03 -- accel/accel.sh@21 -- # val= 00:06:54.929 07:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # IFS=: 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # read -r var val 00:06:54.929 07:22:03 -- accel/accel.sh@21 -- # val= 00:06:54.929 07:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # IFS=: 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # read -r var val 00:06:54.929 07:22:03 -- accel/accel.sh@21 -- # val= 00:06:54.929 07:22:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # IFS=: 00:06:54.929 07:22:03 -- accel/accel.sh@20 -- # read -r var val 00:06:54.929 07:22:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.929 07:22:04 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:54.929 07:22:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.929 00:06:54.929 real 0m3.827s 00:06:54.929 user 0m3.387s 00:06:54.929 sys 0m0.238s 00:06:54.929 07:22:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.929 07:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:54.929 ************************************ 00:06:54.929 END TEST accel_xor 00:06:54.929 ************************************ 00:06:54.929 07:22:04 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.929 07:22:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:54.929 07:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.929 07:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:54.929 ************************************ 00:06:54.929 START TEST accel_dif_verify 00:06:54.929 ************************************ 
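The dif_verify test below switches from raw xor throughput to the DIF (Data Integrity Field) path: each 4096-byte transfer is treated as 512-byte blocks carrying 8 bytes of protection metadata, which accel_perf generates and then checks, matching the Block size and Metadata size fields in the configuration dump that follows. A minimal sketch of the equivalent direct invocation, under the same repo-layout assumption as the xor sketch above:

    cd /home/vagrant/spdk_repo/spdk
    # -w dif_verify: build DIF-protected buffers and verify the protection data;
    # the 512-byte block and 8-byte metadata sizes are the defaults echoed in the log
    ./build/examples/accel_perf -t 1 -w dif_verify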
00:06:54.929 07:22:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:54.929 07:22:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.929 07:22:04 -- accel/accel.sh@17 -- # local accel_module 00:06:54.929 07:22:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:54.929 07:22:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.929 07:22:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.929 07:22:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.929 07:22:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.929 07:22:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.929 07:22:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.929 07:22:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.929 07:22:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.929 07:22:04 -- accel/accel.sh@42 -- # jq -r . 00:06:54.929 [2024-11-19 07:22:04.094975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.929 [2024-11-19 07:22:04.095080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59221 ] 00:06:55.190 [2024-11-19 07:22:04.244043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.453 [2024-11-19 07:22:04.474278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.414 07:22:06 -- accel/accel.sh@18 -- # out=' 00:06:57.414 SPDK Configuration: 00:06:57.414 Core mask: 0x1 00:06:57.414 00:06:57.414 Accel Perf Configuration: 00:06:57.414 Workload Type: dif_verify 00:06:57.414 Vector size: 4096 bytes 00:06:57.414 Transfer size: 4096 bytes 00:06:57.414 Block size: 512 bytes 00:06:57.414 Metadata size: 8 bytes 00:06:57.414 Vector count 1 00:06:57.414 Module: software 00:06:57.414 Queue depth: 32 00:06:57.414 Allocate depth: 32 00:06:57.414 # threads/core: 1 00:06:57.414 Run time: 1 seconds 00:06:57.414 Verify: No 00:06:57.414 00:06:57.414 Running for 1 seconds... 00:06:57.414 00:06:57.414 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.414 ------------------------------------------------------------------------------------ 00:06:57.414 0,0 98016/s 382 MiB/s 0 0 00:06:57.414 ==================================================================================== 00:06:57.414 Total 98016/s 382 MiB/s 0 0' 00:06:57.414 07:22:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:57.414 07:22:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:57.414 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.414 07:22:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.414 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.414 07:22:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.414 07:22:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.414 07:22:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.414 07:22:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.414 07:22:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.414 07:22:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.414 07:22:06 -- accel/accel.sh@42 -- # jq -r . 00:06:57.414 [2024-11-19 07:22:06.336009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:57.414 [2024-11-19 07:22:06.336121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:06:57.414 [2024-11-19 07:22:06.481666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.674 [2024-11-19 07:22:06.699116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val= 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val= 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val=0x1 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val= 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val= 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val=dif_verify 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val= 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val=software 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 
-- # val=32 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val=32 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val=1 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val=No 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val= 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.674 07:22:06 -- accel/accel.sh@21 -- # val= 00:06:57.674 07:22:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # IFS=: 00:06:57.674 07:22:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.593 07:22:08 -- accel/accel.sh@21 -- # val= 00:06:59.593 07:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # IFS=: 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # read -r var val 00:06:59.593 07:22:08 -- accel/accel.sh@21 -- # val= 00:06:59.593 07:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # IFS=: 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # read -r var val 00:06:59.593 07:22:08 -- accel/accel.sh@21 -- # val= 00:06:59.593 07:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # IFS=: 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # read -r var val 00:06:59.593 07:22:08 -- accel/accel.sh@21 -- # val= 00:06:59.593 07:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # IFS=: 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # read -r var val 00:06:59.593 07:22:08 -- accel/accel.sh@21 -- # val= 00:06:59.593 07:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # IFS=: 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # read -r var val 00:06:59.593 07:22:08 -- accel/accel.sh@21 -- # val= 00:06:59.593 07:22:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # IFS=: 00:06:59.593 07:22:08 -- accel/accel.sh@20 -- # read -r var val 00:06:59.593 07:22:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.593 07:22:08 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:59.593 07:22:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.593 ************************************ 00:06:59.593 END TEST accel_dif_verify 00:06:59.593 ************************************ 00:06:59.593 00:06:59.593 real 0m4.399s 00:06:59.593 user 0m3.903s 00:06:59.593 sys 0m0.278s 00:06:59.593 07:22:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.593 
07:22:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.593 07:22:08 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:59.593 07:22:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:59.593 07:22:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.593 07:22:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.593 ************************************ 00:06:59.593 START TEST accel_dif_generate 00:06:59.593 ************************************ 00:06:59.593 07:22:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:59.593 07:22:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.593 07:22:08 -- accel/accel.sh@17 -- # local accel_module 00:06:59.593 07:22:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:59.593 07:22:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:59.593 07:22:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.593 07:22:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.593 07:22:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.593 07:22:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.593 07:22:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.593 07:22:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.593 07:22:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.593 07:22:08 -- accel/accel.sh@42 -- # jq -r . 00:06:59.593 [2024-11-19 07:22:08.555491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.593 [2024-11-19 07:22:08.555600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:06:59.593 [2024-11-19 07:22:08.717843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.854 [2024-11-19 07:22:08.929925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.775 07:22:10 -- accel/accel.sh@18 -- # out=' 00:07:01.775 SPDK Configuration: 00:07:01.775 Core mask: 0x1 00:07:01.775 00:07:01.775 Accel Perf Configuration: 00:07:01.775 Workload Type: dif_generate 00:07:01.775 Vector size: 4096 bytes 00:07:01.775 Transfer size: 4096 bytes 00:07:01.775 Block size: 512 bytes 00:07:01.775 Metadata size: 8 bytes 00:07:01.775 Vector count 1 00:07:01.775 Module: software 00:07:01.775 Queue depth: 32 00:07:01.775 Allocate depth: 32 00:07:01.775 # threads/core: 1 00:07:01.775 Run time: 1 seconds 00:07:01.775 Verify: No 00:07:01.775 00:07:01.775 Running for 1 seconds... 
00:07:01.775 00:07:01.775 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.775 ------------------------------------------------------------------------------------ 00:07:01.775 0,0 117920/s 467 MiB/s 0 0 00:07:01.775 ==================================================================================== 00:07:01.775 Total 117920/s 460 MiB/s 0 0' 00:07:01.775 07:22:10 -- accel/accel.sh@20 -- # IFS=: 00:07:01.775 07:22:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:01.775 07:22:10 -- accel/accel.sh@20 -- # read -r var val 00:07:01.775 07:22:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:01.775 07:22:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.775 07:22:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.775 07:22:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.775 07:22:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.775 07:22:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.775 07:22:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.775 07:22:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.775 07:22:10 -- accel/accel.sh@42 -- # jq -r . 00:07:01.775 [2024-11-19 07:22:10.817845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.775 [2024-11-19 07:22:10.817984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59314 ] 00:07:01.775 [2024-11-19 07:22:10.976324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.036 [2024-11-19 07:22:11.228144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val= 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val= 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val=0x1 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val= 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val= 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val=dif_generate 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 
00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val= 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val=software 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val=32 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val=32 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val=1 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val=No 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val= 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:02.299 07:22:11 -- accel/accel.sh@21 -- # val= 00:07:02.299 07:22:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # IFS=: 00:07:02.299 07:22:11 -- accel/accel.sh@20 -- # read -r var val 00:07:04.214 07:22:13 -- accel/accel.sh@21 -- # val= 00:07:04.214 07:22:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # IFS=: 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # read -r var val 00:07:04.214 07:22:13 -- accel/accel.sh@21 -- # val= 00:07:04.214 07:22:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # IFS=: 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # read -r var val 00:07:04.214 07:22:13 -- accel/accel.sh@21 -- # val= 00:07:04.214 07:22:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.214 07:22:13 -- 
accel/accel.sh@20 -- # IFS=: 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # read -r var val 00:07:04.214 07:22:13 -- accel/accel.sh@21 -- # val= 00:07:04.214 07:22:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # IFS=: 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # read -r var val 00:07:04.214 07:22:13 -- accel/accel.sh@21 -- # val= 00:07:04.214 07:22:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # IFS=: 00:07:04.214 07:22:13 -- accel/accel.sh@20 -- # read -r var val 00:07:04.215 07:22:13 -- accel/accel.sh@21 -- # val= 00:07:04.215 07:22:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.215 07:22:13 -- accel/accel.sh@20 -- # IFS=: 00:07:04.215 07:22:13 -- accel/accel.sh@20 -- # read -r var val 00:07:04.215 07:22:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.215 07:22:13 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:04.215 07:22:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.215 00:07:04.215 real 0m4.564s 00:07:04.215 user 0m4.004s 00:07:04.215 sys 0m0.341s 00:07:04.215 07:22:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.215 07:22:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.215 ************************************ 00:07:04.215 END TEST accel_dif_generate 00:07:04.215 ************************************ 00:07:04.215 07:22:13 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:04.215 07:22:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:04.215 07:22:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.215 07:22:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.215 ************************************ 00:07:04.215 START TEST accel_dif_generate_copy 00:07:04.215 ************************************ 00:07:04.215 07:22:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:04.215 07:22:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.215 07:22:13 -- accel/accel.sh@17 -- # local accel_module 00:07:04.215 07:22:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:04.215 07:22:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:04.215 07:22:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.215 07:22:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.215 07:22:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.215 07:22:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.215 07:22:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.215 07:22:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.215 07:22:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.215 07:22:13 -- accel/accel.sh@42 -- # jq -r . 00:07:04.215 [2024-11-19 07:22:13.183708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:04.215 [2024-11-19 07:22:13.183832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59361 ] 00:07:04.215 [2024-11-19 07:22:13.328856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.475 [2024-11-19 07:22:13.552858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.403 07:22:15 -- accel/accel.sh@18 -- # out=' 00:07:06.403 SPDK Configuration: 00:07:06.403 Core mask: 0x1 00:07:06.403 00:07:06.403 Accel Perf Configuration: 00:07:06.403 Workload Type: dif_generate_copy 00:07:06.403 Vector size: 4096 bytes 00:07:06.403 Transfer size: 4096 bytes 00:07:06.403 Vector count 1 00:07:06.403 Module: software 00:07:06.403 Queue depth: 32 00:07:06.403 Allocate depth: 32 00:07:06.403 # threads/core: 1 00:07:06.403 Run time: 1 seconds 00:07:06.403 Verify: No 00:07:06.403 00:07:06.403 Running for 1 seconds... 00:07:06.403 00:07:06.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.403 ------------------------------------------------------------------------------------ 00:07:06.403 0,0 90144/s 352 MiB/s 0 0 00:07:06.403 ==================================================================================== 00:07:06.403 Total 90144/s 352 MiB/s 0 0' 00:07:06.403 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.403 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.403 07:22:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:06.403 07:22:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:06.403 07:22:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.403 07:22:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.403 07:22:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.403 07:22:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.403 07:22:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.403 07:22:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.403 07:22:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.403 07:22:15 -- accel/accel.sh@42 -- # jq -r . 00:07:06.403 [2024-11-19 07:22:15.434764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:06.403 [2024-11-19 07:22:15.434894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:07:06.403 [2024-11-19 07:22:15.586740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.665 [2024-11-19 07:22:15.827794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val= 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val= 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val=0x1 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val= 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val= 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val= 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val=software 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val=32 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val=32 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 
-- # val=1 00:07:06.923 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.923 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.923 07:22:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.924 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.924 07:22:15 -- accel/accel.sh@21 -- # val=No 00:07:06.924 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.924 07:22:15 -- accel/accel.sh@21 -- # val= 00:07:06.924 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.924 07:22:15 -- accel/accel.sh@21 -- # val= 00:07:06.924 07:22:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # IFS=: 00:07:06.924 07:22:15 -- accel/accel.sh@20 -- # read -r var val 00:07:08.822 07:22:17 -- accel/accel.sh@21 -- # val= 00:07:08.822 07:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.822 07:22:17 -- accel/accel.sh@21 -- # val= 00:07:08.822 07:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.822 07:22:17 -- accel/accel.sh@21 -- # val= 00:07:08.822 07:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.822 07:22:17 -- accel/accel.sh@21 -- # val= 00:07:08.822 07:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.822 07:22:17 -- accel/accel.sh@21 -- # val= 00:07:08.822 07:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.822 07:22:17 -- accel/accel.sh@21 -- # val= 00:07:08.822 07:22:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # IFS=: 00:07:08.822 07:22:17 -- accel/accel.sh@20 -- # read -r var val 00:07:08.822 07:22:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.822 07:22:17 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:08.822 07:22:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.822 00:07:08.822 real 0m4.440s 00:07:08.822 user 0m3.887s 00:07:08.822 sys 0m0.336s 00:07:08.822 ************************************ 00:07:08.822 END TEST accel_dif_generate_copy 00:07:08.822 ************************************ 00:07:08.822 07:22:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.822 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.822 07:22:17 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:08.822 07:22:17 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.822 07:22:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:08.822 07:22:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.822 07:22:17 -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.822 ************************************ 00:07:08.822 START TEST accel_comp 00:07:08.822 ************************************ 00:07:08.822 07:22:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.822 07:22:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.822 07:22:17 -- accel/accel.sh@17 -- # local accel_module 00:07:08.822 07:22:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.822 07:22:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.822 07:22:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.822 07:22:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.822 07:22:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.822 07:22:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.823 07:22:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.823 07:22:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.823 07:22:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.823 07:22:17 -- accel/accel.sh@42 -- # jq -r . 00:07:08.823 [2024-11-19 07:22:17.686931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.823 [2024-11-19 07:22:17.687035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59433 ] 00:07:08.823 [2024-11-19 07:22:17.837805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.823 [2024-11-19 07:22:18.014355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.719 07:22:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.719 00:07:10.719 SPDK Configuration: 00:07:10.719 Core mask: 0x1 00:07:10.719 00:07:10.719 Accel Perf Configuration: 00:07:10.719 Workload Type: compress 00:07:10.719 Transfer size: 4096 bytes 00:07:10.719 Vector count 1 00:07:10.719 Module: software 00:07:10.719 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.719 Queue depth: 32 00:07:10.719 Allocate depth: 32 00:07:10.719 # threads/core: 1 00:07:10.719 Run time: 1 seconds 00:07:10.719 Verify: No 00:07:10.719 00:07:10.719 Running for 1 seconds... 
00:07:10.719 00:07:10.719 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.719 ------------------------------------------------------------------------------------ 00:07:10.720 0,0 48960/s 204 MiB/s 0 0 00:07:10.720 ==================================================================================== 00:07:10.720 Total 48960/s 191 MiB/s 0 0' 00:07:10.720 07:22:19 -- accel/accel.sh@20 -- # IFS=: 00:07:10.720 07:22:19 -- accel/accel.sh@20 -- # read -r var val 00:07:10.720 07:22:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.720 07:22:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.720 07:22:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.720 07:22:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.720 07:22:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.720 07:22:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.720 07:22:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.720 07:22:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.720 07:22:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.720 07:22:19 -- accel/accel.sh@42 -- # jq -r . 00:07:10.720 [2024-11-19 07:22:19.798604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.720 [2024-11-19 07:22:19.798832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:07:10.720 [2024-11-19 07:22:19.949223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.978 [2024-11-19 07:22:20.132544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=0x1 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=compress 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 
00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=software 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=32 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=32 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=1 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val=No 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:11.236 07:22:20 -- accel/accel.sh@21 -- # val= 00:07:11.236 07:22:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # IFS=: 00:07:11.236 07:22:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.139 07:22:21 -- accel/accel.sh@21 -- # val= 00:07:13.139 07:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.139 07:22:21 -- accel/accel.sh@21 -- # val= 00:07:13.139 07:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.139 07:22:21 -- accel/accel.sh@21 -- # val= 00:07:13.139 07:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.139 07:22:21 -- accel/accel.sh@21 -- # val= 
00:07:13.139 07:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.139 07:22:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.139 07:22:21 -- accel/accel.sh@21 -- # val= 00:07:13.139 07:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.140 07:22:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.140 07:22:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.140 07:22:21 -- accel/accel.sh@21 -- # val= 00:07:13.140 07:22:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.140 07:22:21 -- accel/accel.sh@20 -- # IFS=: 00:07:13.140 07:22:21 -- accel/accel.sh@20 -- # read -r var val 00:07:13.140 ************************************ 00:07:13.140 END TEST accel_comp 00:07:13.140 ************************************ 00:07:13.140 07:22:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.140 07:22:21 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:13.140 07:22:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.140 00:07:13.140 real 0m4.235s 00:07:13.140 user 0m3.773s 00:07:13.140 sys 0m0.249s 00:07:13.140 07:22:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.140 07:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:13.140 07:22:21 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.140 07:22:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:13.140 07:22:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.140 07:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:13.140 ************************************ 00:07:13.140 START TEST accel_decomp 00:07:13.140 ************************************ 00:07:13.140 07:22:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.140 07:22:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.140 07:22:21 -- accel/accel.sh@17 -- # local accel_module 00:07:13.140 07:22:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.140 07:22:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.140 07:22:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.140 07:22:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.140 07:22:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.140 07:22:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.140 07:22:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.140 07:22:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.140 07:22:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.140 07:22:21 -- accel/accel.sh@42 -- # jq -r . 00:07:13.140 [2024-11-19 07:22:21.982946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.140 [2024-11-19 07:22:21.983051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59501 ] 00:07:13.140 [2024-11-19 07:22:22.133091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.140 [2024-11-19 07:22:22.313386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.040 07:22:24 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:15.040 
00:07:15.040 SPDK Configuration:
00:07:15.040 Core mask: 0x1
00:07:15.040 
00:07:15.040 Accel Perf Configuration:
00:07:15.040 Workload Type: decompress
00:07:15.040 Transfer size: 4096 bytes
00:07:15.040 Vector count 1
00:07:15.040 Module: software
00:07:15.040 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:15.040 Queue depth: 32
00:07:15.040 Allocate depth: 32
00:07:15.040 # threads/core: 1
00:07:15.040 Run time: 1 seconds
00:07:15.040 Verify: Yes
00:07:15.040 
00:07:15.040 Running for 1 seconds...
00:07:15.040 
00:07:15.040 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:15.040 ------------------------------------------------------------------------------------
00:07:15.040 0,0 62336/s 114 MiB/s 0 0
00:07:15.040 ====================================================================================
00:07:15.040 Total 62336/s 243 MiB/s 0 0'
00:07:15.040 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.040 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.040 07:22:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:15.040 07:22:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:15.040 07:22:24 -- accel/accel.sh@12 -- # build_accel_config
00:07:15.040 07:22:24 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:15.040 07:22:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.040 07:22:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.040 07:22:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:15.040 07:22:24 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:15.040 07:22:24 -- accel/accel.sh@41 -- # local IFS=,
00:07:15.040 07:22:24 -- accel/accel.sh@42 -- # jq -r .
00:07:15.040 [2024-11-19 07:22:24.104953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
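For reference, the run summarized above can be reproduced outside the harness with the same accel_perf binary and flags that the trace records. The harness additionally feeds a generated JSON accel config on fd 62 via -c /dev/fd/62; omitting it here is an assumption that the built-in defaults suffice for a software-module run:

    # Standalone sketch of the decompress run above (paths taken from the log):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y    # verify results, matching "Verify: Yes" in the summary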
00:07:15.040 [2024-11-19 07:22:24.105061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59527 ]
00:07:15.040 [2024-11-19 07:22:24.255044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.300 [2024-11-19 07:22:24.436200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=0x1
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=decompress
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@24 -- # accel_opc=decompress
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=software
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@23 -- # accel_module=software
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=32
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=32
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=1
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=Yes
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:15.561 07:22:24 -- accel/accel.sh@21 -- # val=
00:07:15.561 07:22:24 -- accel/accel.sh@22 -- # case "$var" in
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # IFS=:
00:07:15.561 07:22:24 -- accel/accel.sh@20 -- # read -r var val
00:07:17.470 07:22:26 -- accel/accel.sh@21 -- # val=
00:07:17.470 07:22:26 -- accel/accel.sh@22 -- # case "$var" in
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # IFS=:
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # read -r var val
00:07:17.470 07:22:26 -- accel/accel.sh@21 -- # val=
00:07:17.470 07:22:26 -- accel/accel.sh@22 -- # case "$var" in
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # IFS=:
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # read -r var val
00:07:17.470 07:22:26 -- accel/accel.sh@21 -- # val=
00:07:17.470 07:22:26 -- accel/accel.sh@22 -- # case "$var" in
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # IFS=:
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # read -r var val
00:07:17.470 07:22:26 -- accel/accel.sh@21 -- # val=
00:07:17.470 07:22:26 -- accel/accel.sh@22 -- # case "$var" in
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # IFS=:
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # read -r var val
00:07:17.470 07:22:26 -- accel/accel.sh@21 -- # val=
00:07:17.470 07:22:26 -- accel/accel.sh@22 -- # case "$var" in
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # IFS=:
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # read -r var val
00:07:17.470 07:22:26 -- accel/accel.sh@21 -- # val=
00:07:17.470 07:22:26 -- accel/accel.sh@22 -- # case "$var" in
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # IFS=:
00:07:17.470 07:22:26 -- accel/accel.sh@20 -- # read -r var val
00:07:17.470 ************************************
00:07:17.470 END TEST accel_decomp
00:07:17.470 ************************************
00:07:17.470 07:22:26 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:17.470 07:22:26 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:07:17.470 07:22:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:17.470 
00:07:17.470 real 0m4.305s
00:07:17.470 user 0m3.823s
00:07:17.470 sys 0m0.270s
00:07:17.470 07:22:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:17.470 07:22:26 -- common/autotest_common.sh@10 -- # set +x
00:07:17.470 07:22:26 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
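The START/END banners and the real/user/sys triple bracketing each test come from the run_test wrapper in autotest_common.sh. A hypothetical reconstruction of its observable shape, inferred from the banners and timing in this log rather than quoted from the actual source:

    # Hypothetical sketch of run_test's behavior as seen in this log.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # produces the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }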
00:07:17.470 07:22:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:07:17.470 07:22:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:17.470 07:22:26 -- common/autotest_common.sh@10 -- # set +x
00:07:17.470 ************************************
00:07:17.470 START TEST accel_decmop_full
00:07:17.470 ************************************
00:07:17.470 07:22:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:17.470 07:22:26 -- accel/accel.sh@16 -- # local accel_opc
00:07:17.470 07:22:26 -- accel/accel.sh@17 -- # local accel_module
00:07:17.470 07:22:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:17.470 07:22:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:17.470 07:22:26 -- accel/accel.sh@12 -- # build_accel_config
00:07:17.470 07:22:26 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:17.470 07:22:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:17.470 07:22:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:17.470 07:22:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:17.470 07:22:26 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:17.470 07:22:26 -- accel/accel.sh@41 -- # local IFS=,
00:07:17.470 07:22:26 -- accel/accel.sh@42 -- # jq -r .
00:07:17.470 [2024-11-19 07:22:26.350539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-11-19 07:22:26.350644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59574 ]
[2024-11-19 07:22:26.499112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-19 07:22:26.650656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.414 07:22:28 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:19.414 
00:07:19.414 SPDK Configuration:
00:07:19.414 Core mask: 0x1
00:07:19.414 
00:07:19.414 Accel Perf Configuration:
00:07:19.414 Workload Type: decompress
00:07:19.414 Transfer size: 111250 bytes
00:07:19.414 Vector count 1
00:07:19.414 Module: software
00:07:19.414 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:19.414 Queue depth: 32
00:07:19.414 Allocate depth: 32
00:07:19.414 # threads/core: 1
00:07:19.414 Run time: 1 seconds
00:07:19.414 Verify: Yes
00:07:19.414 
00:07:19.414 Running for 1 seconds...
00:07:19.414 
00:07:19.414 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:19.414 ------------------------------------------------------------------------------------
00:07:19.414 0,0 5472/s 226 MiB/s 0 0
00:07:19.414 ====================================================================================
00:07:19.414 Total 5472/s 580 MiB/s 0 0'
00:07:19.414 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.414 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.414 07:22:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:19.414 07:22:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:19.414 07:22:28 -- accel/accel.sh@12 -- # build_accel_config
00:07:19.414 07:22:28 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:19.414 07:22:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:19.414 07:22:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:19.414 07:22:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:19.414 07:22:28 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:19.414 07:22:28 -- accel/accel.sh@41 -- # local IFS=,
00:07:19.414 07:22:28 -- accel/accel.sh@42 -- # jq -r .
00:07:19.414 [2024-11-19 07:22:28.292367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:19.414 [2024-11-19 07:22:28.292968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59595 ]
00:07:19.414 [2024-11-19 07:22:28.439319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.414 [2024-11-19 07:22:28.590552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=0x1
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=decompress
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@24 -- # accel_opc=decompress
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val='111250 bytes'
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=software
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@23 -- # accel_module=software
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=32
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=32
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=1
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=Yes
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:19.672 07:22:28 -- accel/accel.sh@21 -- # val=
00:07:19.672 07:22:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # IFS=:
00:07:19.672 07:22:28 -- accel/accel.sh@20 -- # read -r var val
00:07:21.045 07:22:30 -- accel/accel.sh@21 -- # val=
00:07:21.045 07:22:30 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # IFS=:
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # read -r var val
00:07:21.045 07:22:30 -- accel/accel.sh@21 -- # val=
00:07:21.045 07:22:30 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # IFS=:
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # read -r var val
00:07:21.045 07:22:30 -- accel/accel.sh@21 -- # val=
00:07:21.045 07:22:30 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # IFS=:
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # read -r var val
00:07:21.045 07:22:30 -- accel/accel.sh@21 -- # val=
00:07:21.045 07:22:30 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # IFS=:
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # read -r var val
00:07:21.045 07:22:30 -- accel/accel.sh@21 -- # val=
00:07:21.045 07:22:30 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # IFS=:
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # read -r var val
00:07:21.045 07:22:30 -- accel/accel.sh@21 -- # val=
00:07:21.045 07:22:30 -- accel/accel.sh@22 -- # case "$var" in
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # IFS=:
00:07:21.045 07:22:30 -- accel/accel.sh@20 -- # read -r var val
00:07:21.045 07:22:30 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:21.045 07:22:30 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:07:21.045 07:22:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:21.045 
00:07:21.045 real 0m3.888s
00:07:21.045 user 0m3.455s
00:07:21.045 sys 0m0.222s
00:07:21.045 ************************************
00:07:21.045 END TEST accel_decmop_full
00:07:21.045 ************************************
00:07:21.045 07:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:21.045 07:22:30 -- common/autotest_common.sh@10 -- # set +x
00:07:21.045 07:22:30 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:21.045 07:22:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:07:21.045 07:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:21.045 07:22:30 -- common/autotest_common.sh@10 -- # set +x
00:07:21.045 ************************************
00:07:21.045 START TEST accel_decomp_mcore
00:07:21.045 ************************************
00:07:21.045 07:22:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:21.045 07:22:30 -- accel/accel.sh@16 -- # local accel_opc
00:07:21.045 07:22:30 -- accel/accel.sh@17 -- # local accel_module
00:07:21.045 07:22:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:21.045 07:22:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:21.045 07:22:30 -- accel/accel.sh@12 -- # build_accel_config
00:07:21.045 07:22:30 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:21.045 07:22:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:21.045 07:22:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:21.045 07:22:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:21.045 07:22:30 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:21.045 07:22:30 -- accel/accel.sh@41 -- # local IFS=,
00:07:21.045 07:22:30 -- accel/accel.sh@42 -- # jq -r .
00:07:21.045 [2024-11-19 07:22:30.296429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
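Relative to the single-core runs above, accel_decomp_mcore adds -m 0xf, a hexadecimal core mask selecting cores 0-3; the EAL parameters below echo it as -c 0xf and four reactors start instead of one. A minimal standalone sketch of the invocation, with flags taken from the run_test line above (the harness-supplied -c /dev/fd/62 JSON config is again omitted as an assumption):

    # Four-core decompress run; 0xf = binary 1111 = cores 0,1,2,3.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -m 0xf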
00:07:21.045 [2024-11-19 07:22:30.296532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59635 ]
00:07:21.303 [2024-11-19 07:22:30.446410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:21.583 [2024-11-19 07:22:30.641388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:21.583 [2024-11-19 07:22:30.641921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:21.583 [2024-11-19 07:22:30.642120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.583 [2024-11-19 07:22:30.642107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:23.492 07:22:32 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:23.492 
00:07:23.492 SPDK Configuration:
00:07:23.492 Core mask: 0xf
00:07:23.492 
00:07:23.492 Accel Perf Configuration:
00:07:23.492 Workload Type: decompress
00:07:23.492 Transfer size: 4096 bytes
00:07:23.492 Vector count 1
00:07:23.492 Module: software
00:07:23.492 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:23.492 Queue depth: 32
00:07:23.492 Allocate depth: 32
00:07:23.492 # threads/core: 1
00:07:23.492 Run time: 1 seconds
00:07:23.492 Verify: Yes
00:07:23.492 
00:07:23.492 Running for 1 seconds...
00:07:23.492 
00:07:23.492 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:23.492 ------------------------------------------------------------------------------------
00:07:23.492 0,0 56000/s 103 MiB/s 0 0
00:07:23.492 3,0 55712/s 102 MiB/s 0 0
00:07:23.492 2,0 55680/s 102 MiB/s 0 0
00:07:23.492 1,0 55872/s 102 MiB/s 0 0
00:07:23.492 ====================================================================================
00:07:23.492 Total 223264/s 872 MiB/s 0 0'
00:07:23.492 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.492 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.492 07:22:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:23.492 07:22:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:23.492 07:22:32 -- accel/accel.sh@12 -- # build_accel_config
00:07:23.492 07:22:32 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:23.492 07:22:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:23.492 07:22:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:23.492 07:22:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:23.492 07:22:32 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:23.492 07:22:32 -- accel/accel.sh@41 -- # local IFS=,
00:07:23.492 07:22:32 -- accel/accel.sh@42 -- # jq -r .
00:07:23.492 [2024-11-19 07:22:32.462044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:23.492 [2024-11-19 07:22:32.462316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59670 ]
00:07:23.492 [2024-11-19 07:22:32.613793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:23.750 [2024-11-19 07:22:32.797324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:23.750 [2024-11-19 07:22:32.797547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:23.750 [2024-11-19 07:22:32.797750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:23.750 [2024-11-19 07:22:32.797758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=0xf
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=decompress
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@24 -- # accel_opc=decompress
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=software
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@23 -- # accel_module=software
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=32
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=32
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=1
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=Yes
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:23.751 07:22:32 -- accel/accel.sh@21 -- # val=
00:07:23.751 07:22:32 -- accel/accel.sh@22 -- # case "$var" in
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # IFS=:
00:07:23.751 07:22:32 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@21 -- # val=
00:07:25.667 07:22:34 -- accel/accel.sh@22 -- # case "$var" in
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # IFS=:
00:07:25.667 07:22:34 -- accel/accel.sh@20 -- # read -r var val
00:07:25.667 07:22:34 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:25.667 07:22:34 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:07:25.667 07:22:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.667 
00:07:25.667 real 0m4.260s
00:07:25.667 user 0m12.707s
00:07:25.667 sys 0m0.294s
00:07:25.667 07:22:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:25.667 07:22:34 -- common/autotest_common.sh@10 -- # set +x
00:07:25.667 ************************************
00:07:25.667 END TEST accel_decomp_mcore
00:07:25.667 ************************************
00:07:25.667 07:22:34 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:25.667 07:22:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:07:25.667 07:22:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:25.667 07:22:34 -- common/autotest_common.sh@10 -- # set +x
00:07:25.667 ************************************
00:07:25.667 START TEST accel_decomp_full_mcore
00:07:25.667 ************************************
00:07:25.667 07:22:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:25.667 07:22:34 -- accel/accel.sh@16 -- # local accel_opc
00:07:25.667 07:22:34 -- accel/accel.sh@17 -- # local accel_module
00:07:25.667 07:22:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:25.667 07:22:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:25.667 07:22:34 -- accel/accel.sh@12 -- # build_accel_config
00:07:25.667 07:22:34 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:25.667 07:22:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:25.667 07:22:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:25.667 07:22:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:25.667 07:22:34 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:25.667 07:22:34 -- accel/accel.sh@41 -- # local IFS=,
00:07:25.667 07:22:34 -- accel/accel.sh@42 -- # jq -r .
00:07:25.667 [2024-11-19 07:22:34.610904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:25.667 [2024-11-19 07:22:34.611007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ]
00:07:25.928 [2024-11-19 07:22:34.759861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:25.928 [2024-11-19 07:22:35.005720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:25.928 [2024-11-19 07:22:35.006349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:25.928 [2024-11-19 07:22:35.006849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:25.928 [2024-11-19 07:22:35.006980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.840 07:22:36 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:27.840 
00:07:27.840 SPDK Configuration:
00:07:27.840 Core mask: 0xf
00:07:27.840 
00:07:27.840 Accel Perf Configuration:
00:07:27.840 Workload Type: decompress
00:07:27.840 Transfer size: 111250 bytes
00:07:27.840 Vector count 1
00:07:27.840 Module: software
00:07:27.840 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:27.840 Queue depth: 32
00:07:27.840 Allocate depth: 32
00:07:27.840 # threads/core: 1
00:07:27.840 Run time: 1 seconds
00:07:27.840 Verify: Yes
00:07:27.840 
00:07:27.840 Running for 1 seconds...
00:07:27.840 
00:07:27.840 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:27.840 ------------------------------------------------------------------------------------
00:07:27.840 0,0 4320/s 178 MiB/s 0 0
00:07:27.840 3,0 4320/s 178 MiB/s 0 0
00:07:27.840 2,0 5600/s 231 MiB/s 0 0
00:07:27.840 1,0 4320/s 178 MiB/s 0 0
00:07:27.840 ====================================================================================
00:07:27.840 Total 18560/s 1969 MiB/s 0 0'
00:07:27.840 07:22:36 -- accel/accel.sh@20 -- # IFS=:
00:07:27.840 07:22:36 -- accel/accel.sh@20 -- # read -r var val
00:07:27.840 07:22:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:27.840 07:22:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:27.840 07:22:36 -- accel/accel.sh@12 -- # build_accel_config
00:07:27.840 07:22:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:27.840 07:22:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:27.840 07:22:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:27.840 07:22:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:27.840 07:22:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:27.840 07:22:36 -- accel/accel.sh@41 -- # local IFS=,
00:07:27.840 07:22:36 -- accel/accel.sh@42 -- # jq -r .
00:07:27.840 [2024-11-19 07:22:36.794199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
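A quick consistency check on the table above: the Total transfer rate is the sum of the four per-core rows, and each transfer is the full 111250-byte buffer selected by -o 0 (per the "Transfer size" line in the configuration):

    # Sanity check of the Total row from the per-core rows:
    echo $((4320 + 4320 + 5600 + 4320))   # prints 18560, matching "Total 18560/s"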
00:07:27.840 [2024-11-19 07:22:36.794286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59743 ]
00:07:27.840 [2024-11-19 07:22:36.936226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:27.840 [2024-11-19 07:22:37.090727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:27.840 [2024-11-19 07:22:37.091351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:27.840 [2024-11-19 07:22:37.091431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.840 [2024-11-19 07:22:37.091451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=0xf
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=decompress
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@24 -- # accel_opc=decompress
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val='111250 bytes'
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=software
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@23 -- # accel_module=software
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=32
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=32
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=1
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=Yes
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:28.102 07:22:37 -- accel/accel.sh@21 -- # val=
00:07:28.102 07:22:37 -- accel/accel.sh@22 -- # case "$var" in
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # IFS=:
00:07:28.102 07:22:37 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@21 -- # val=
00:07:29.491 07:22:38 -- accel/accel.sh@22 -- # case "$var" in
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # IFS=:
00:07:29.491 07:22:38 -- accel/accel.sh@20 -- # read -r var val
00:07:29.491 07:22:38 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:29.491 07:22:38 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:07:29.491 07:22:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:29.491 
00:07:29.491 real 0m4.140s
00:07:29.491 user 0m12.423s
00:07:29.491 sys 0m0.298s
00:07:29.491 07:22:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:29.491 ************************************
00:07:29.491 END TEST accel_decomp_full_mcore
00:07:29.491 ************************************
00:07:29.491 07:22:38 -- common/autotest_common.sh@10 -- # set +x
00:07:29.761 07:22:38 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:29.761 07:22:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:07:29.761 07:22:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:29.761 07:22:38 -- common/autotest_common.sh@10 -- # set +x
00:07:29.761 ************************************
00:07:29.761 START TEST accel_decomp_mthread
00:07:29.761 ************************************
00:07:29.761 07:22:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:29.761 07:22:38 -- accel/accel.sh@16 -- # local accel_opc
00:07:29.761 07:22:38 -- accel/accel.sh@17 -- # local accel_module
00:07:29.761 07:22:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:29.761 07:22:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:29.761 07:22:38 -- accel/accel.sh@12 -- # build_accel_config
00:07:29.761 07:22:38 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:29.761 07:22:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:29.761 07:22:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:29.761 07:22:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:29.761 07:22:38 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:29.761 07:22:38 -- accel/accel.sh@41 -- # local IFS=,
00:07:29.761 07:22:38 -- accel/accel.sh@42 -- # jq -r .
00:07:29.761 [2024-11-19 07:22:38.784607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:29.761 [2024-11-19 07:22:38.784687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59787 ]
00:07:29.761 [2024-11-19 07:22:38.926288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.075 [2024-11-19 07:22:39.079877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.929 07:22:40 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:31.929 
00:07:31.929 SPDK Configuration:
00:07:31.929 Core mask: 0x1
00:07:31.929 
00:07:31.929 Accel Perf Configuration:
00:07:31.929 Workload Type: decompress
00:07:31.929 Transfer size: 4096 bytes
00:07:31.929 Vector count 1
00:07:31.929 Module: software
00:07:31.929 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:31.929 Queue depth: 32
00:07:31.929 Allocate depth: 32
00:07:31.929 # threads/core: 2
00:07:31.929 Run time: 1 seconds
00:07:31.929 Verify: Yes
00:07:31.929 
00:07:31.929 Running for 1 seconds...
00:07:31.929 
00:07:31.929 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:31.929 ------------------------------------------------------------------------------------
00:07:31.929 0,1 39936/s 73 MiB/s 0 0
00:07:31.929 0,0 39840/s 73 MiB/s 0 0
00:07:31.929 ====================================================================================
00:07:31.929 Total 79776/s 311 MiB/s 0 0'
00:07:31.929 07:22:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:31.929 07:22:40 -- accel/accel.sh@20 -- # IFS=:
00:07:31.929 07:22:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:31.929 07:22:40 -- accel/accel.sh@20 -- # read -r var val
00:07:31.929 07:22:40 -- accel/accel.sh@12 -- # build_accel_config
00:07:31.929 07:22:40 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:31.929 07:22:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:31.929 07:22:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:31.929 07:22:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:31.929 07:22:40 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:31.929 07:22:40 -- accel/accel.sh@41 -- # local IFS=,
00:07:31.929 07:22:40 -- accel/accel.sh@42 -- # jq -r .
00:07:31.929 [2024-11-19 07:22:40.738648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
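accel_decomp_mthread passes -T 2, so the summary above reports two worker threads on the single enabled core — rows 0,0 and 0,1 — whose rates again sum to the Total row: 39936 + 39840 = 79776/s. A minimal standalone sketch of the invocation, flags taken from the run_test line above (the harness JSON config on fd 62 is omitted as before):

    # Two worker threads per core on core 0:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -T 2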
00:07:31.929 [2024-11-19 07:22:40.738753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ]
00:07:31.929 [2024-11-19 07:22:40.886555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.929 [2024-11-19 07:22:41.062837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=0x1
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=decompress
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@24 -- # accel_opc=decompress
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val='4096 bytes'
00:07:32.187 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.187 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.187 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=software
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@23 -- # accel_module=software
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=32
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=32
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=2
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=Yes
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:32.188 07:22:41 -- accel/accel.sh@21 -- # val=
00:07:32.188 07:22:41 -- accel/accel.sh@22 -- # case "$var" in
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # IFS=:
00:07:32.188 07:22:41 -- accel/accel.sh@20 -- # read -r var val
00:07:33.558 07:22:42 -- accel/accel.sh@21 -- # val=
00:07:33.558 07:22:42 -- accel/accel.sh@22 -- # case "$var" in
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # IFS=:
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # read -r var val
00:07:33.558 07:22:42 -- accel/accel.sh@21 -- # val=
00:07:33.558 07:22:42 -- accel/accel.sh@22 -- # case "$var" in
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # IFS=:
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # read -r var val
00:07:33.558 07:22:42 -- accel/accel.sh@21 -- # val=
00:07:33.558 07:22:42 -- accel/accel.sh@22 -- # case "$var" in
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # IFS=:
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # read -r var val
00:07:33.558 07:22:42 -- accel/accel.sh@21 -- # val=
00:07:33.558 07:22:42 -- accel/accel.sh@22 -- # case "$var" in
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # IFS=:
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # read -r var val
00:07:33.558 07:22:42 -- accel/accel.sh@21 -- # val=
00:07:33.558 07:22:42 -- accel/accel.sh@22 -- # case "$var" in
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # IFS=:
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # read -r var val
00:07:33.558 07:22:42 -- accel/accel.sh@21 -- # val=
00:07:33.558 07:22:42 -- accel/accel.sh@22 -- # case "$var" in
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # IFS=:
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # read -r var val
00:07:33.558 07:22:42 -- accel/accel.sh@21 -- # val=
00:07:33.558 07:22:42 -- accel/accel.sh@22 -- # case "$var" in
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # IFS=:
00:07:33.558 07:22:42 -- accel/accel.sh@20 -- # read -r var val
00:07:33.816 07:22:42 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:33.816 ************************************
00:07:33.816 END TEST accel_decomp_mthread
00:07:33.816 ************************************
00:07:33.816 07:22:42 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:07:33.816 07:22:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:33.816 
00:07:33.816 real 0m4.060s
00:07:33.816 user 0m3.626s
00:07:33.816 sys 0m0.227s
00:07:33.816 07:22:42 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:33.816 07:22:42 -- common/autotest_common.sh@10 -- # set +x
00:07:33.816 07:22:42 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:07:33.816 07:22:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:07:33.816 07:22:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:33.816 07:22:42 -- common/autotest_common.sh@10 -- # set +x
00:07:33.816 ************************************
00:07:33.816 START TEST accel_deomp_full_mthread
00:07:33.816 ************************************
00:07:33.816 07:22:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:07:33.816 07:22:42 -- accel/accel.sh@16 -- # local accel_opc
00:07:33.816 07:22:42 -- accel/accel.sh@17 -- # local accel_module
00:07:33.816 07:22:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:07:33.816 07:22:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:07:33.816 07:22:42 -- accel/accel.sh@12 -- # build_accel_config
00:07:33.816 07:22:42 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:33.816 07:22:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:33.816 07:22:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:33.816 07:22:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:33.816 07:22:42 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:33.816 07:22:42 -- accel/accel.sh@41 -- # local IFS=,
00:07:33.816 07:22:42 -- accel/accel.sh@42 -- # jq -r .
00:07:33.816 [2024-11-19 07:22:42.880732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:33.816 [2024-11-19 07:22:42.880814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ]
00:07:33.816 [2024-11-19 07:22:43.025130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.075 [2024-11-19 07:22:43.200497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.978 07:22:44 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:35.978 
00:07:35.978 SPDK Configuration:
00:07:35.978 Core mask: 0x1
00:07:35.978 
00:07:35.978 Accel Perf Configuration:
00:07:35.978 Workload Type: decompress
00:07:35.978 Transfer size: 111250 bytes
00:07:35.978 Vector count 1
00:07:35.978 Module: software
00:07:35.978 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:35.978 Queue depth: 32
00:07:35.978 Allocate depth: 32
00:07:35.978 # threads/core: 2
00:07:35.978 Run time: 1 seconds
00:07:35.978 Verify: Yes
00:07:35.978 
00:07:35.978 Running for 1 seconds...
00:07:35.978 00:07:35.978 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.978 ------------------------------------------------------------------------------------ 00:07:35.978 0,1 2208/s 91 MiB/s 0 0 00:07:35.979 0,0 2176/s 89 MiB/s 0 0 00:07:35.979 ==================================================================================== 00:07:35.979 Total 4384/s 465 MiB/s 0 0' 00:07:35.979 07:22:44 -- accel/accel.sh@20 -- # IFS=: 00:07:35.979 07:22:44 -- accel/accel.sh@20 -- # read -r var val 00:07:35.979 07:22:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.979 07:22:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.979 07:22:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.979 07:22:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.979 07:22:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.979 07:22:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.979 07:22:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.979 07:22:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.979 07:22:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.979 07:22:44 -- accel/accel.sh@42 -- # jq -r . 00:07:35.979 [2024-11-19 07:22:44.918907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.979 [2024-11-19 07:22:44.919156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59880 ] 00:07:35.979 [2024-11-19 07:22:45.063920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.237 [2024-11-19 07:22:45.249432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=0x1 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=decompress 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=software 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=32 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=32 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=2 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val=Yes 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:36.237 07:22:45 -- accel/accel.sh@21 -- # val= 00:07:36.237 07:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # IFS=: 00:07:36.237 07:22:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@21 -- # val= 00:07:38.143 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@21 -- # val= 00:07:38.143 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@21 -- # val= 00:07:38.143 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # 
read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@21 -- # val= 00:07:38.143 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@21 -- # val= 00:07:38.143 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@21 -- # val= 00:07:38.143 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@21 -- # val= 00:07:38.143 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:07:38.143 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:07:38.143 07:22:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.143 07:22:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:38.143 07:22:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.143 00:07:38.143 real 0m4.190s 00:07:38.143 user 0m3.740s 00:07:38.143 sys 0m0.237s 00:07:38.143 07:22:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.143 07:22:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.143 ************************************ 00:07:38.143 END TEST accel_deomp_full_mthread 00:07:38.143 ************************************ 00:07:38.143 07:22:47 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:38.143 07:22:47 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:38.143 07:22:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:38.143 07:22:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.143 07:22:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.144 07:22:47 -- accel/accel.sh@129 -- # build_accel_config 00:07:38.144 07:22:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.144 07:22:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.144 07:22:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.144 07:22:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.144 07:22:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.144 07:22:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.144 07:22:47 -- accel/accel.sh@42 -- # jq -r . 00:07:38.144 ************************************ 00:07:38.144 START TEST accel_dif_functional_tests 00:07:38.144 ************************************ 00:07:38.144 07:22:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:38.144 [2024-11-19 07:22:47.141521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:38.144 [2024-11-19 07:22:47.141631] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:07:38.144 [2024-11-19 07:22:47.288559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.406 [2024-11-19 07:22:47.438069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.406 [2024-11-19 07:22:47.438577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.406 [2024-11-19 07:22:47.438584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.406 00:07:38.406 00:07:38.406 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.406 http://cunit.sourceforge.net/ 00:07:38.406 00:07:38.406 00:07:38.406 Suite: accel_dif 00:07:38.406 Test: verify: DIF generated, GUARD check ...passed 00:07:38.406 Test: verify: DIF generated, APPTAG check ...passed 00:07:38.406 Test: verify: DIF generated, REFTAG check ...passed 00:07:38.406 Test: verify: DIF not generated, GUARD check ...[2024-11-19 07:22:47.612706] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:38.406 [2024-11-19 07:22:47.612759] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:38.406 passed 00:07:38.406 Test: verify: DIF not generated, APPTAG check ...[2024-11-19 07:22:47.612803] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:38.406 passed 00:07:38.406 Test: verify: DIF not generated, REFTAG check ...passed 00:07:38.406 Test: verify: APPTAG correct, APPTAG check ...[2024-11-19 07:22:47.612859] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:38.406 [2024-11-19 07:22:47.612885] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:38.406 [2024-11-19 07:22:47.612903] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:38.406 passed 00:07:38.406 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:38.406 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:38.406 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-11-19 07:22:47.612992] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:38.406 passed 00:07:38.406 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:38.406 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:38.406 Test: generate copy: DIF generated, GUARD check ...passed 00:07:38.406 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:38.406 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:38.406 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:38.406 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-11-19 07:22:47.613274] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:38.406 passed 00:07:38.406 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:38.406 Test: generate copy: iovecs-len validate ...[2024-11-19 07:22:47.613583] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:38.406 passed 00:07:38.406 Test: generate copy: buffer alignment validate ...passed 00:07:38.406 00:07:38.406 Run
Summary: Type Total Ran Passed Failed Inactive 00:07:38.406 suites 1 1 n/a 0 0 00:07:38.406 tests 20 20 20 0 0 00:07:38.406 asserts 204 204 204 0 n/a 00:07:38.406 00:07:38.406 Elapsed time = 0.003 seconds 00:07:38.979 00:07:38.979 real 0m1.145s 00:07:38.979 user 0m2.041s 00:07:38.979 sys 0m0.155s 00:07:38.979 07:22:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.979 ************************************ 00:07:38.979 END TEST accel_dif_functional_tests 00:07:38.979 ************************************ 00:07:38.979 07:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:39.240 ************************************ 00:07:39.240 END TEST accel 00:07:39.240 ************************************ 00:07:39.240 00:07:39.240 real 1m30.392s 00:07:39.240 user 1m38.390s 00:07:39.240 sys 0m6.770s 00:07:39.240 07:22:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.240 07:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:39.240 07:22:48 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:39.240 07:22:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.240 07:22:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.240 07:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:39.240 ************************************ 00:07:39.240 START TEST accel_rpc 00:07:39.240 ************************************ 00:07:39.240 07:22:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:39.240 * Looking for test storage... 00:07:39.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:39.240 07:22:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:39.240 07:22:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:39.240 07:22:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.240 07:22:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.240 07:22:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:39.240 07:22:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:39.240 07:22:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:39.240 07:22:48 -- scripts/common.sh@335 -- # IFS=.-: 00:07:39.240 07:22:48 -- scripts/common.sh@335 -- # read -ra ver1 00:07:39.240 07:22:48 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.240 07:22:48 -- scripts/common.sh@336 -- # read -ra ver2 00:07:39.240 07:22:48 -- scripts/common.sh@337 -- # local 'op=<' 00:07:39.240 07:22:48 -- scripts/common.sh@339 -- # ver1_l=2 00:07:39.240 07:22:48 -- scripts/common.sh@340 -- # ver2_l=1 00:07:39.240 07:22:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:39.240 07:22:48 -- scripts/common.sh@343 -- # case "$op" in 00:07:39.240 07:22:48 -- scripts/common.sh@344 -- # : 1 00:07:39.240 07:22:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:39.240 07:22:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:07:39.240 07:22:48 -- scripts/common.sh@364 -- # decimal 1 00:07:39.240 07:22:48 -- scripts/common.sh@352 -- # local d=1 00:07:39.240 07:22:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.240 07:22:48 -- scripts/common.sh@354 -- # echo 1 00:07:39.240 07:22:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:39.240 07:22:48 -- scripts/common.sh@365 -- # decimal 2 00:07:39.240 07:22:48 -- scripts/common.sh@352 -- # local d=2 00:07:39.240 07:22:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.240 07:22:48 -- scripts/common.sh@354 -- # echo 2 00:07:39.240 07:22:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:39.241 07:22:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:39.241 07:22:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:39.241 07:22:48 -- scripts/common.sh@367 -- # return 0 00:07:39.241 07:22:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.241 07:22:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.241 --rc genhtml_branch_coverage=1 00:07:39.241 --rc genhtml_function_coverage=1 00:07:39.241 --rc genhtml_legend=1 00:07:39.241 --rc geninfo_all_blocks=1 00:07:39.241 --rc geninfo_unexecuted_blocks=1 00:07:39.241 00:07:39.241 ' 00:07:39.241 07:22:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.241 --rc genhtml_branch_coverage=1 00:07:39.241 --rc genhtml_function_coverage=1 00:07:39.241 --rc genhtml_legend=1 00:07:39.241 --rc geninfo_all_blocks=1 00:07:39.241 --rc geninfo_unexecuted_blocks=1 00:07:39.241 00:07:39.241 ' 00:07:39.241 07:22:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.241 --rc genhtml_branch_coverage=1 00:07:39.241 --rc genhtml_function_coverage=1 00:07:39.241 --rc genhtml_legend=1 00:07:39.241 --rc geninfo_all_blocks=1 00:07:39.241 --rc geninfo_unexecuted_blocks=1 00:07:39.241 00:07:39.241 ' 00:07:39.241 07:22:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.241 --rc genhtml_branch_coverage=1 00:07:39.241 --rc genhtml_function_coverage=1 00:07:39.241 --rc genhtml_legend=1 00:07:39.241 --rc geninfo_all_blocks=1 00:07:39.241 --rc geninfo_unexecuted_blocks=1 00:07:39.241 00:07:39.241 ' 00:07:39.241 07:22:48 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.241 07:22:48 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=60000 00:07:39.241 07:22:48 -- accel/accel_rpc.sh@15 -- # waitforlisten 60000 00:07:39.241 07:22:48 -- common/autotest_common.sh@829 -- # '[' -z 60000 ']' 00:07:39.241 07:22:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.241 07:22:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.241 07:22:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
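The accel_rpc run underway here launches a bare spdk_tgt with --wait-for-rpc, the state in which opcode-to-module assignments are still mutable. A minimal by-hand sketch of the same flow, using the paths and RPC names exactly as they appear in this log (the final jq check is what the suite asserts):

    # start a target that defers framework init so opcode assignment is still allowed;
    # wait for /var/tmp/spdk.sock to appear before issuing RPCs
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    $rpc framework_start_init                     # finish init; assignments are frozen from here on
    $rpc accel_get_opc_assignments | jq -r .copy  # expect: software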
00:07:39.241 07:22:48 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:39.241 07:22:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.241 07:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:39.502 [2024-11-19 07:22:48.503460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.502 [2024-11-19 07:22:48.503727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:07:39.502 [2024-11-19 07:22:48.649287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.763 [2024-11-19 07:22:48.827994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.763 [2024-11-19 07:22:48.828374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.369 07:22:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.369 07:22:49 -- common/autotest_common.sh@862 -- # return 0 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:40.369 07:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.369 07:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.369 07:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:40.369 ************************************ 00:07:40.369 START TEST accel_assign_opcode 00:07:40.369 ************************************ 00:07:40.369 07:22:49 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:40.369 07:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.369 07:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:40.369 [2024-11-19 07:22:49.329029] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:40.369 07:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:40.369 07:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.369 07:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:40.369 [2024-11-19 07:22:49.336986] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:40.369 07:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.369 07:22:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:40.369 07:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.369 07:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:40.627 07:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.627 07:22:49 -- accel/accel_rpc.sh@42 -- # grep software 00:07:40.627 07:22:49 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:40.627 07:22:49 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:40.627 07:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.627 07:22:49 -- common/autotest_common.sh@10 -- # set +x 
00:07:40.885 07:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.885 software 00:07:40.885 00:07:40.885 real 0m0.583s 00:07:40.885 user 0m0.036s 00:07:40.885 sys 0m0.008s 00:07:40.885 ************************************ 00:07:40.885 END TEST accel_assign_opcode 00:07:40.885 ************************************ 00:07:40.885 07:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.885 07:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:40.885 07:22:49 -- accel/accel_rpc.sh@55 -- # killprocess 60000 00:07:40.885 07:22:49 -- common/autotest_common.sh@936 -- # '[' -z 60000 ']' 00:07:40.885 07:22:49 -- common/autotest_common.sh@940 -- # kill -0 60000 00:07:40.885 07:22:49 -- common/autotest_common.sh@941 -- # uname 00:07:40.885 07:22:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.885 07:22:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60000 00:07:40.885 killing process with pid 60000 00:07:40.885 07:22:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:40.885 07:22:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:40.885 07:22:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60000' 00:07:40.885 07:22:49 -- common/autotest_common.sh@955 -- # kill 60000 00:07:40.885 07:22:49 -- common/autotest_common.sh@960 -- # wait 60000 00:07:42.260 00:07:42.260 real 0m3.191s 00:07:42.260 user 0m3.149s 00:07:42.260 sys 0m0.395s 00:07:42.260 ************************************ 00:07:42.260 END TEST accel_rpc 00:07:42.260 ************************************ 00:07:42.260 07:22:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.260 07:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:42.519 07:22:51 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:42.519 07:22:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:42.519 07:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.519 07:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:42.519 ************************************ 00:07:42.519 START TEST app_cmdline 00:07:42.519 ************************************ 00:07:42.519 07:22:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:42.519 * Looking for test storage... 
00:07:42.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:42.519 07:22:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:42.519 07:22:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:42.519 07:22:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:42.519 07:22:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:42.519 07:22:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:42.519 07:22:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:42.519 07:22:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:42.519 07:22:51 -- scripts/common.sh@335 -- # IFS=.-: 00:07:42.519 07:22:51 -- scripts/common.sh@335 -- # read -ra ver1 00:07:42.519 07:22:51 -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.519 07:22:51 -- scripts/common.sh@336 -- # read -ra ver2 00:07:42.519 07:22:51 -- scripts/common.sh@337 -- # local 'op=<' 00:07:42.519 07:22:51 -- scripts/common.sh@339 -- # ver1_l=2 00:07:42.519 07:22:51 -- scripts/common.sh@340 -- # ver2_l=1 00:07:42.519 07:22:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:42.519 07:22:51 -- scripts/common.sh@343 -- # case "$op" in 00:07:42.519 07:22:51 -- scripts/common.sh@344 -- # : 1 00:07:42.519 07:22:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:42.519 07:22:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.519 07:22:51 -- scripts/common.sh@364 -- # decimal 1 00:07:42.519 07:22:51 -- scripts/common.sh@352 -- # local d=1 00:07:42.519 07:22:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.519 07:22:51 -- scripts/common.sh@354 -- # echo 1 00:07:42.519 07:22:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:42.519 07:22:51 -- scripts/common.sh@365 -- # decimal 2 00:07:42.519 07:22:51 -- scripts/common.sh@352 -- # local d=2 00:07:42.519 07:22:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.519 07:22:51 -- scripts/common.sh@354 -- # echo 2 00:07:42.519 07:22:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:42.519 07:22:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:42.519 07:22:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:42.519 07:22:51 -- scripts/common.sh@367 -- # return 0 00:07:42.519 07:22:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.519 07:22:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:42.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.519 --rc genhtml_branch_coverage=1 00:07:42.519 --rc genhtml_function_coverage=1 00:07:42.519 --rc genhtml_legend=1 00:07:42.519 --rc geninfo_all_blocks=1 00:07:42.519 --rc geninfo_unexecuted_blocks=1 00:07:42.519 00:07:42.519 ' 00:07:42.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
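The app_cmdline run underway here starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be invoked. A minimal sketch of the checks it performs (same rpc.py as above; the failure reply is the target's own, as seen further down):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version                      # allowed: returns the version JSON shown below
    $rpc rpc_get_methods | jq -r '.[]' | sort  # allowed: lists exactly the two permitted methods
    $rpc env_dpdk_get_mem_stats                # not on the allowlist: JSON-RPC error -32601, "Method not found"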
00:07:42.519 07:22:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:42.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.519 --rc genhtml_branch_coverage=1 00:07:42.519 --rc genhtml_function_coverage=1 00:07:42.519 --rc genhtml_legend=1 00:07:42.519 --rc geninfo_all_blocks=1 00:07:42.519 --rc geninfo_unexecuted_blocks=1 00:07:42.519 00:07:42.519 ' 00:07:42.519 07:22:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:42.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.519 --rc genhtml_branch_coverage=1 00:07:42.519 --rc genhtml_function_coverage=1 00:07:42.519 --rc genhtml_legend=1 00:07:42.519 --rc geninfo_all_blocks=1 00:07:42.519 --rc geninfo_unexecuted_blocks=1 00:07:42.519 00:07:42.519 ' 00:07:42.519 07:22:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:42.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.519 --rc genhtml_branch_coverage=1 00:07:42.519 --rc genhtml_function_coverage=1 00:07:42.519 --rc genhtml_legend=1 00:07:42.519 --rc geninfo_all_blocks=1 00:07:42.519 --rc geninfo_unexecuted_blocks=1 00:07:42.519 00:07:42.519 ' 00:07:42.519 07:22:51 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:42.519 07:22:51 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60113 00:07:42.519 07:22:51 -- app/cmdline.sh@18 -- # waitforlisten 60113 00:07:42.519 07:22:51 -- common/autotest_common.sh@829 -- # '[' -z 60113 ']' 00:07:42.519 07:22:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.519 07:22:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.519 07:22:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.519 07:22:51 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:42.519 07:22:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.519 07:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:42.519 [2024-11-19 07:22:51.743148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.519 [2024-11-19 07:22:51.743422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60113 ] 00:07:42.778 [2024-11-19 07:22:51.893593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.037 [2024-11-19 07:22:52.068510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.037 [2024-11-19 07:22:52.068807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.411 07:22:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.411 07:22:53 -- common/autotest_common.sh@862 -- # return 0 00:07:44.411 07:22:53 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:44.411 { 00:07:44.411 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:44.411 "fields": { 00:07:44.411 "major": 24, 00:07:44.411 "minor": 1, 00:07:44.411 "patch": 1, 00:07:44.411 "suffix": "-pre", 00:07:44.411 "commit": "c13c99a5e" 00:07:44.411 } 00:07:44.411 } 00:07:44.411 07:22:53 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:44.411 07:22:53 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:44.411 07:22:53 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:44.411 07:22:53 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:44.411 07:22:53 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:44.411 07:22:53 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:44.411 07:22:53 -- app/cmdline.sh@26 -- # sort 00:07:44.411 07:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.411 07:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:44.411 07:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.411 07:22:53 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:44.411 07:22:53 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:44.411 07:22:53 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.411 07:22:53 -- common/autotest_common.sh@650 -- # local es=0 00:07:44.411 07:22:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.411 07:22:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.411 07:22:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.411 07:22:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.412 07:22:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.412 07:22:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.412 07:22:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.412 07:22:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.412 07:22:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:44.412 07:22:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.412 request: 00:07:44.412 { 00:07:44.412 "method": "env_dpdk_get_mem_stats", 00:07:44.412 "req_id": 1 00:07:44.412 } 00:07:44.412 Got 
JSON-RPC error response 00:07:44.412 response: 00:07:44.412 { 00:07:44.412 "code": -32601, 00:07:44.412 "message": "Method not found" 00:07:44.412 } 00:07:44.412 07:22:53 -- common/autotest_common.sh@653 -- # es=1 00:07:44.412 07:22:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.412 07:22:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.412 07:22:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.412 07:22:53 -- app/cmdline.sh@1 -- # killprocess 60113 00:07:44.412 07:22:53 -- common/autotest_common.sh@936 -- # '[' -z 60113 ']' 00:07:44.412 07:22:53 -- common/autotest_common.sh@940 -- # kill -0 60113 00:07:44.412 07:22:53 -- common/autotest_common.sh@941 -- # uname 00:07:44.412 07:22:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.412 07:22:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60113 00:07:44.412 killing process with pid 60113 00:07:44.412 07:22:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:44.412 07:22:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:44.412 07:22:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60113' 00:07:44.412 07:22:53 -- common/autotest_common.sh@955 -- # kill 60113 00:07:44.412 07:22:53 -- common/autotest_common.sh@960 -- # wait 60113 00:07:45.785 00:07:45.785 real 0m3.319s 00:07:45.785 user 0m3.732s 00:07:45.785 sys 0m0.429s 00:07:45.785 07:22:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.785 07:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:45.785 ************************************ 00:07:45.785 END TEST app_cmdline 00:07:45.785 ************************************ 00:07:45.785 07:22:54 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:45.785 07:22:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:45.785 07:22:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.785 07:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:45.785 ************************************ 00:07:45.785 START TEST version 00:07:45.785 ************************************ 00:07:45.785 07:22:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:45.785 * Looking for test storage... 
00:07:45.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:45.785 07:22:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:45.785 07:22:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:45.785 07:22:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:46.043 07:22:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:46.043 07:22:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:46.043 07:22:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:46.043 07:22:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:46.043 07:22:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:46.043 07:22:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:46.043 07:22:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.043 07:22:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:46.043 07:22:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:46.043 07:22:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:46.043 07:22:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:46.043 07:22:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:46.043 07:22:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:46.043 07:22:55 -- scripts/common.sh@344 -- # : 1 00:07:46.043 07:22:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:46.043 07:22:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.043 07:22:55 -- scripts/common.sh@364 -- # decimal 1 00:07:46.043 07:22:55 -- scripts/common.sh@352 -- # local d=1 00:07:46.043 07:22:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.043 07:22:55 -- scripts/common.sh@354 -- # echo 1 00:07:46.043 07:22:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:46.043 07:22:55 -- scripts/common.sh@365 -- # decimal 2 00:07:46.043 07:22:55 -- scripts/common.sh@352 -- # local d=2 00:07:46.043 07:22:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.043 07:22:55 -- scripts/common.sh@354 -- # echo 2 00:07:46.043 07:22:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:46.043 07:22:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:46.043 07:22:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:46.043 07:22:55 -- scripts/common.sh@367 -- # return 0 00:07:46.043 07:22:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.043 07:22:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.043 --rc genhtml_branch_coverage=1 00:07:46.043 --rc genhtml_function_coverage=1 00:07:46.043 --rc genhtml_legend=1 00:07:46.044 --rc geninfo_all_blocks=1 00:07:46.044 --rc geninfo_unexecuted_blocks=1 00:07:46.044 00:07:46.044 ' 00:07:46.044 07:22:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.044 --rc genhtml_branch_coverage=1 00:07:46.044 --rc genhtml_function_coverage=1 00:07:46.044 --rc genhtml_legend=1 00:07:46.044 --rc geninfo_all_blocks=1 00:07:46.044 --rc geninfo_unexecuted_blocks=1 00:07:46.044 00:07:46.044 ' 00:07:46.044 07:22:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.044 --rc genhtml_branch_coverage=1 00:07:46.044 --rc genhtml_function_coverage=1 00:07:46.044 --rc genhtml_legend=1 00:07:46.044 --rc geninfo_all_blocks=1 00:07:46.044 --rc geninfo_unexecuted_blocks=1 00:07:46.044 00:07:46.044 ' 00:07:46.044 07:22:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.044 --rc genhtml_branch_coverage=1 00:07:46.044 --rc genhtml_function_coverage=1 00:07:46.044 --rc genhtml_legend=1 00:07:46.044 --rc geninfo_all_blocks=1 00:07:46.044 --rc geninfo_unexecuted_blocks=1 00:07:46.044 00:07:46.044 ' 00:07:46.044 07:22:55 -- app/version.sh@17 -- # get_header_version major 00:07:46.044 07:22:55 -- app/version.sh@14 -- # cut -f2 00:07:46.044 07:22:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.044 07:22:55 -- app/version.sh@14 -- # tr -d '"' 00:07:46.044 07:22:55 -- app/version.sh@17 -- # major=24 00:07:46.044 07:22:55 -- app/version.sh@18 -- # get_header_version minor 00:07:46.044 07:22:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.044 07:22:55 -- app/version.sh@14 -- # cut -f2 00:07:46.044 07:22:55 -- app/version.sh@14 -- # tr -d '"' 00:07:46.044 07:22:55 -- app/version.sh@18 -- # minor=1 00:07:46.044 07:22:55 -- app/version.sh@19 -- # get_header_version patch 00:07:46.044 07:22:55 -- app/version.sh@14 -- # cut -f2 00:07:46.044 07:22:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.044 07:22:55 -- app/version.sh@14 -- # tr -d '"' 00:07:46.044 07:22:55 -- app/version.sh@19 -- # patch=1 00:07:46.044 07:22:55 -- app/version.sh@20 -- # get_header_version suffix 00:07:46.044 07:22:55 -- app/version.sh@14 -- # cut -f2 00:07:46.044 07:22:55 -- app/version.sh@14 -- # tr -d '"' 00:07:46.044 07:22:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.044 07:22:55 -- app/version.sh@20 -- # suffix=-pre 00:07:46.044 07:22:55 -- app/version.sh@22 -- # version=24.1 00:07:46.044 07:22:55 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:46.044 07:22:55 -- app/version.sh@25 -- # version=24.1.1 00:07:46.044 07:22:55 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:46.044 07:22:55 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:46.044 07:22:55 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:46.044 07:22:55 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:46.044 07:22:55 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:46.044 ************************************ 00:07:46.044 END TEST version 00:07:46.044 ************************************ 00:07:46.044 00:07:46.044 real 0m0.225s 00:07:46.044 user 0m0.155s 00:07:46.044 sys 0m0.096s 00:07:46.044 07:22:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.044 07:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:46.044 07:22:55 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:46.044 07:22:55 -- spdk/autotest.sh@191 -- # uname -s 00:07:46.044 07:22:55 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:46.044 07:22:55 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:46.044 07:22:55 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:46.044 07:22:55 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:46.044 07:22:55 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme 
/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:46.044 07:22:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:46.044 07:22:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.044 07:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:46.044 ************************************ 00:07:46.044 START TEST blockdev_nvme 00:07:46.044 ************************************ 00:07:46.044 07:22:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:46.044 * Looking for test storage... 00:07:46.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:46.044 07:22:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:46.044 07:22:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:46.044 07:22:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:46.044 07:22:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:46.044 07:22:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:46.044 07:22:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:46.044 07:22:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:46.044 07:22:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:46.044 07:22:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:46.044 07:22:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.044 07:22:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:46.044 07:22:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:46.044 07:22:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:46.044 07:22:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:46.044 07:22:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:46.044 07:22:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:46.044 07:22:55 -- scripts/common.sh@344 -- # : 1 00:07:46.044 07:22:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:46.044 07:22:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.044 07:22:55 -- scripts/common.sh@364 -- # decimal 1 00:07:46.044 07:22:55 -- scripts/common.sh@352 -- # local d=1 00:07:46.044 07:22:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.044 07:22:55 -- scripts/common.sh@354 -- # echo 1 00:07:46.044 07:22:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:46.044 07:22:55 -- scripts/common.sh@365 -- # decimal 2 00:07:46.044 07:22:55 -- scripts/common.sh@352 -- # local d=2 00:07:46.044 07:22:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.044 07:22:55 -- scripts/common.sh@354 -- # echo 2 00:07:46.302 07:22:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:46.302 07:22:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:46.302 07:22:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:46.302 07:22:55 -- scripts/common.sh@367 -- # return 0 00:07:46.302 07:22:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.302 07:22:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.302 --rc genhtml_branch_coverage=1 00:07:46.302 --rc genhtml_function_coverage=1 00:07:46.302 --rc genhtml_legend=1 00:07:46.302 --rc geninfo_all_blocks=1 00:07:46.302 --rc geninfo_unexecuted_blocks=1 00:07:46.302 00:07:46.302 ' 00:07:46.302 07:22:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.302 --rc genhtml_branch_coverage=1 00:07:46.302 --rc genhtml_function_coverage=1 00:07:46.302 --rc genhtml_legend=1 00:07:46.302 --rc geninfo_all_blocks=1 00:07:46.302 --rc geninfo_unexecuted_blocks=1 00:07:46.302 00:07:46.302 ' 00:07:46.302 07:22:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.302 --rc genhtml_branch_coverage=1 00:07:46.302 --rc genhtml_function_coverage=1 00:07:46.302 --rc genhtml_legend=1 00:07:46.302 --rc geninfo_all_blocks=1 00:07:46.302 --rc geninfo_unexecuted_blocks=1 00:07:46.302 00:07:46.302 ' 00:07:46.302 07:22:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:46.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.302 --rc genhtml_branch_coverage=1 00:07:46.302 --rc genhtml_function_coverage=1 00:07:46.302 --rc genhtml_legend=1 00:07:46.303 --rc geninfo_all_blocks=1 00:07:46.303 --rc geninfo_unexecuted_blocks=1 00:07:46.303 00:07:46.303 ' 00:07:46.303 07:22:55 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:46.303 07:22:55 -- bdev/nbd_common.sh@6 -- # set -e 00:07:46.303 07:22:55 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:46.303 07:22:55 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:46.303 07:22:55 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:46.303 07:22:55 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:46.303 07:22:55 -- bdev/blockdev.sh@18 -- # : 00:07:46.303 07:22:55 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:46.303 07:22:55 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:46.303 07:22:55 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:46.303 07:22:55 -- bdev/blockdev.sh@672 -- # uname -s 00:07:46.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
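The blockdev_nvme run underway here attaches four PCIe controllers by feeding gen_nvme.sh output to load_subsystem_config (blockdev.sh@81 below). The same attach can be issued by hand against a freshly started target; a sketch trimmed to a single controller, with method name and parameter values verbatim from the config in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc load_subsystem_config -j '{ "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } } ] }'
    $rpc bdev_get_bdevs | jq -r '.[].name'     # Nvme0n1 should appear among the bdevs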
00:07:46.303 07:22:55 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:46.303 07:22:55 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:46.303 07:22:55 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:46.303 07:22:55 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:46.303 07:22:55 -- bdev/blockdev.sh@682 -- # dek= 00:07:46.303 07:22:55 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:46.303 07:22:55 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:46.303 07:22:55 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:46.303 07:22:55 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:46.303 07:22:55 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:46.303 07:22:55 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:46.303 07:22:55 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60295 00:07:46.303 07:22:55 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:46.303 07:22:55 -- bdev/blockdev.sh@47 -- # waitforlisten 60295 00:07:46.303 07:22:55 -- common/autotest_common.sh@829 -- # '[' -z 60295 ']' 00:07:46.303 07:22:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.303 07:22:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.303 07:22:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.303 07:22:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.303 07:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 07:22:55 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:46.303 [2024-11-19 07:22:55.373102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.303 [2024-11-19 07:22:55.373388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60295 ] 00:07:46.303 [2024-11-19 07:22:55.516778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.561 [2024-11-19 07:22:55.691423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.561 [2024-11-19 07:22:55.691624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.933 07:22:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.933 07:22:56 -- common/autotest_common.sh@862 -- # return 0 00:07:47.933 07:22:56 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:47.933 07:22:56 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:47.934 07:22:56 -- bdev/blockdev.sh@79 -- # local json 00:07:47.934 07:22:56 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:47.934 07:22:56 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.934 07:22:56 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:47.934 07:22:56 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.934 07:22:56 -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 07:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.193 07:22:57 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:48.193 07:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.193 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 07:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.193 07:22:57 -- bdev/blockdev.sh@738 -- # cat 00:07:48.193 07:22:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:48.193 07:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.193 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 07:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.193 07:22:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:48.193 07:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.193 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 07:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.193 07:22:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:48.193 07:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.193 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 07:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.193 07:22:57 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:48.193 07:22:57 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:48.193 07:22:57 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:48.193 07:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.193 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 07:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.193 07:22:57 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:48.193 07:22:57 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:48.194 07:22:57 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6151c9ed-0e92-4722-8d56-7824d47492e6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6151c9ed-0e92-4722-8d56-7824d47492e6",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c954578a-6b26-4830-a341-6edc00e07328"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c954578a-6b26-4830-a341-6edc00e07328",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e6d16ea1-fbc5-406c-aa1a-e01a73c0416c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e6d16ea1-fbc5-406c-aa1a-e01a73c0416c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6f7da180-4de7-4cb3-85ec-56a67174cd8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f7da180-4de7-4cb3-85ec-56a67174cd8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d723dfc2-7931-48fb-a04e-3ecfd67a90a3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d723dfc2-7931-48fb-a04e-3ecfd67a90a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "478e783f-d46c-4c21-a7a4-b569dc05d0c2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "478e783f-d46c-4c21-a7a4-b569dc05d0c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:48.194 07:22:57 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:48.194 07:22:57 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:48.194 07:22:57 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:48.194 07:22:57 -- bdev/blockdev.sh@752 -- # killprocess 60295 00:07:48.194 07:22:57 -- common/autotest_common.sh@936 -- # '[' -z 60295 ']' 00:07:48.194 07:22:57 -- common/autotest_common.sh@940 -- # kill -0 60295 00:07:48.194 07:22:57 -- common/autotest_common.sh@941 -- # uname 00:07:48.194 07:22:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.194 07:22:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60295 
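With the bdev dump captured into bdev_list, killprocess tears down the setup app. The same per-namespace metadata can be pulled from any live SPDK application; a minimal sketch, assuming the app listens on the default /var/tmp/spdk.sock and jq is installed:

# Summarize every registered bdev; bdev_get_bdevs is the RPC whose output
# the harness snapshots above, one JSON object per namespace.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | "\(.name): \(.num_blocks) blocks of \(.block_size) bytes"'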
00:07:48.194 killing process with pid 60295 00:07:48.194 07:22:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.194 07:22:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.194 07:22:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60295' 00:07:48.194 07:22:57 -- common/autotest_common.sh@955 -- # kill 60295 00:07:48.194 07:22:57 -- common/autotest_common.sh@960 -- # wait 60295 00:07:49.568 07:22:58 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:49.568 07:22:58 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:49.568 07:22:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:49.568 07:22:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.568 07:22:58 -- common/autotest_common.sh@10 -- # set +x 00:07:49.568 ************************************ 00:07:49.568 START TEST bdev_hello_world 00:07:49.568 ************************************ 00:07:49.568 07:22:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:49.568 [2024-11-19 07:22:58.798784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.568 [2024-11-19 07:22:58.799027] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60381 ] 00:07:49.826 [2024-11-19 07:22:58.946402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.083 [2024-11-19 07:22:59.098759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.340 [2024-11-19 07:22:59.570870] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:50.340 [2024-11-19 07:22:59.570910] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:50.341 [2024-11-19 07:22:59.570926] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:50.341 [2024-11-19 07:22:59.572884] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:50.341 [2024-11-19 07:22:59.573240] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:50.341 [2024-11-19 07:22:59.573257] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:50.341 [2024-11-19 07:22:59.573595] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
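At this point hello_bdev has written its string to Nvme0n1 and read it back. The example needs nothing from this job beyond a JSON config defining whatever bdev -b names; a hedged sketch of a fully standalone run from an SPDK repo root, where Malloc0 and /tmp/hello.json are illustrative rather than taken from this build:

# Hypothetical self-contained run against a RAM-backed bdev instead of NVMe.
cat > /tmp/hello.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 4096 } }
      ]
    }
  ]
}
EOF
./build/examples/hello_bdev --json /tmp/hello.json -b Malloc0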
00:07:50.341 00:07:50.341 [2024-11-19 07:22:59.573610] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:51.273 00:07:51.273 real 0m1.468s 00:07:51.273 user 0m1.204s 00:07:51.273 sys 0m0.158s 00:07:51.273 07:23:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.273 07:23:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.273 ************************************ 00:07:51.273 END TEST bdev_hello_world 00:07:51.273 ************************************ 00:07:51.273 07:23:00 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:51.273 07:23:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.273 07:23:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.273 07:23:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.273 ************************************ 00:07:51.273 START TEST bdev_bounds 00:07:51.273 ************************************ 00:07:51.273 07:23:00 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:51.273 07:23:00 -- bdev/blockdev.sh@288 -- # bdevio_pid=60418 00:07:51.273 07:23:00 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.273 07:23:00 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60418' 00:07:51.273 Process bdevio pid: 60418 00:07:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.273 07:23:00 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:51.273 07:23:00 -- bdev/blockdev.sh@291 -- # waitforlisten 60418 00:07:51.273 07:23:00 -- common/autotest_common.sh@829 -- # '[' -z 60418 ']' 00:07:51.273 07:23:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.273 07:23:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.273 07:23:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.273 07:23:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.273 07:23:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.273 [2024-11-19 07:23:00.304265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.273 [2024-11-19 07:23:00.304378] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60418 ] 00:07:51.273 [2024-11-19 07:23:00.452168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.531 [2024-11-19 07:23:00.605762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.531 [2024-11-19 07:23:00.606079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.531 [2024-11-19 07:23:00.606030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.097 07:23:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.097 07:23:01 -- common/autotest_common.sh@862 -- # return 0 00:07:52.097 07:23:01 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:52.097 I/O targets: 00:07:52.097 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:52.097 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:52.097 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:52.097 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:52.097 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:52.097 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:52.097 00:07:52.097 00:07:52.097 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.097 http://cunit.sourceforge.net/ 00:07:52.097 00:07:52.097 00:07:52.097 Suite: bdevio tests on: Nvme3n1 00:07:52.097 Test: blockdev write read block ...passed 00:07:52.097 Test: blockdev write zeroes read block ...passed 00:07:52.097 Test: blockdev write zeroes read no split ...passed 00:07:52.097 Test: blockdev write zeroes read split ...passed 00:07:52.097 Test: blockdev write zeroes read split partial ...passed 00:07:52.097 Test: blockdev reset ...[2024-11-19 07:23:01.279797] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:52.097 passed 00:07:52.097 Test: blockdev write read 8 blocks ...[2024-11-19 07:23:01.282567] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
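The reset just logged is part of the suite, not a failure: bdevio disconnects and reconnects each controller before exercising I/O on it. All of these suites are CUnit tests compiled into the bdevio app, which idles under -w until tests.py pokes it over the RPC socket. The equivalent manual two-step, assuming an SPDK build tree:

# Start bdevio in wait mode, then trigger the CUnit suites via RPC.
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
sleep 2    # crude stand-in for the harness's waitforlisten helper
./test/bdev/bdevio/tests.py perform_tests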
00:07:52.097 passed 00:07:52.097 Test: blockdev write read size > 128k ...passed 00:07:52.097 Test: blockdev write read invalid size ...passed 00:07:52.097 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.097 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.097 Test: blockdev write read max offset ...passed 00:07:52.097 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.097 Test: blockdev writev readv 8 blocks ...passed 00:07:52.097 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.097 Test: blockdev writev readv block ...passed 00:07:52.097 Test: blockdev writev readv size > 128k ...passed 00:07:52.097 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.097 Test: blockdev comparev and writev ...[2024-11-19 07:23:01.289090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f00e000 len:0x1000 00:07:52.097 [2024-11-19 07:23:01.289332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.097 passed 00:07:52.097 Test: blockdev nvme passthru rw ...passed 00:07:52.097 Test: blockdev nvme passthru vendor specific ...[2024-11-19 07:23:01.290252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.097 passed 00:07:52.097 Test: blockdev nvme admin passthru ...[2024-11-19 07:23:01.290307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.097 passed 00:07:52.097 Test: blockdev copy ...passed 00:07:52.097 Suite: bdevio tests on: Nvme2n3 00:07:52.097 Test: blockdev write read block ...passed 00:07:52.097 Test: blockdev write zeroes read block ...passed 00:07:52.097 Test: blockdev write zeroes read no split ...passed 00:07:52.403 Test: blockdev write zeroes read split ...passed 00:07:52.403 Test: blockdev write zeroes read split partial ...passed 00:07:52.403 Test: blockdev reset ...[2024-11-19 07:23:01.375412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:52.403 passed 00:07:52.403 Test: blockdev write read 8 blocks ...[2024-11-19 07:23:01.378204] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.403 passed 00:07:52.403 Test: blockdev write read size > 128k ...passed 00:07:52.403 Test: blockdev write read invalid size ...passed 00:07:52.403 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.403 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.403 Test: blockdev write read max offset ...passed 00:07:52.403 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.403 Test: blockdev writev readv 8 blocks ...passed 00:07:52.403 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.403 Test: blockdev writev readv block ...passed 00:07:52.403 Test: blockdev writev readv size > 128k ...passed 00:07:52.403 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.403 Test: blockdev comparev and writev ...[2024-11-19 07:23:01.384074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f00a000 len:0x1000 00:07:52.403 [2024-11-19 07:23:01.384108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.403 passed 00:07:52.403 Test: blockdev nvme passthru rw ...passed 00:07:52.403 Test: blockdev nvme passthru vendor specific ...[2024-11-19 07:23:01.385254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.403 [2024-11-19 07:23:01.385342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.403 passed 00:07:52.403 Test: blockdev nvme admin passthru ...passed 00:07:52.403 Test: blockdev copy ...passed 00:07:52.403 Suite: bdevio tests on: Nvme2n2 00:07:52.403 Test: blockdev write read block ...passed 00:07:52.403 Test: blockdev write zeroes read block ...passed 00:07:52.403 Test: blockdev write zeroes read no split ...passed 00:07:52.403 Test: blockdev write zeroes read split ...passed 00:07:52.403 Test: blockdev write zeroes read split partial ...passed 00:07:52.403 Test: blockdev reset ...[2024-11-19 07:23:01.449875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:52.403 passed 00:07:52.403 Test: blockdev write read 8 blocks ...[2024-11-19 07:23:01.452553] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.403 passed 00:07:52.403 Test: blockdev write read size > 128k ...passed 00:07:52.403 Test: blockdev write read invalid size ...passed 00:07:52.403 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.403 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.403 Test: blockdev write read max offset ...passed 00:07:52.403 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.403 Test: blockdev writev readv 8 blocks ...passed 00:07:52.403 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.403 Test: blockdev writev readv block ...passed 00:07:52.403 Test: blockdev writev readv size > 128k ...passed 00:07:52.403 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.403 Test: blockdev comparev and writev ...passed 00:07:52.403 Test: blockdev nvme passthru rw ...[2024-11-19 07:23:01.458274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f406000 len:0x1000 00:07:52.403 [2024-11-19 07:23:01.458311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.403 passed 00:07:52.403 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.403 Test: blockdev nvme admin passthru ...[2024-11-19 07:23:01.458879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.403 [2024-11-19 07:23:01.458901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.404 passed 00:07:52.404 Test: blockdev copy ...passed 00:07:52.404 Suite: bdevio tests on: Nvme2n1 00:07:52.404 Test: blockdev write read block ...passed 00:07:52.404 Test: blockdev write zeroes read block ...passed 00:07:52.404 Test: blockdev write zeroes read no split ...passed 00:07:52.404 Test: blockdev write zeroes read split ...passed 00:07:52.404 Test: blockdev write zeroes read split partial ...passed 00:07:52.404 Test: blockdev reset ...[2024-11-19 07:23:01.509933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:52.404 [2024-11-19 07:23:01.512612] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.404 passed 00:07:52.404 Test: blockdev write read 8 blocks ...passed 00:07:52.404 Test: blockdev write read size > 128k ...passed 00:07:52.404 Test: blockdev write read invalid size ...passed 00:07:52.404 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.404 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.404 Test: blockdev write read max offset ...passed 00:07:52.404 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.404 Test: blockdev writev readv 8 blocks ...passed 00:07:52.404 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.404 Test: blockdev writev readv block ...passed 00:07:52.404 Test: blockdev writev readv size > 128k ...passed 00:07:52.404 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.404 Test: blockdev comparev and writev ...passed 00:07:52.404 Test: blockdev nvme passthru rw ...[2024-11-19 07:23:01.517843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f401000 len:0x1000 00:07:52.404 [2024-11-19 07:23:01.517879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.404 passed 00:07:52.404 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.404 Test: blockdev nvme admin passthru ...[2024-11-19 07:23:01.518399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.404 [2024-11-19 07:23:01.518423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.404 passed 00:07:52.404 Test: blockdev copy ...passed 00:07:52.404 Suite: bdevio tests on: Nvme1n1 00:07:52.404 Test: blockdev write read block ...passed 00:07:52.404 Test: blockdev write zeroes read block ...passed 00:07:52.404 Test: blockdev write zeroes read no split ...passed 00:07:52.404 Test: blockdev write zeroes read split ...passed 00:07:52.404 Test: blockdev write zeroes read split partial ...passed 00:07:52.404 Test: blockdev reset ...[2024-11-19 07:23:01.573731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:52.404 passed 00:07:52.404 Test: blockdev write read 8 blocks ...[2024-11-19 07:23:01.576254] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.404 passed 00:07:52.404 Test: blockdev write read size > 128k ...passed 00:07:52.404 Test: blockdev write read invalid size ...passed 00:07:52.404 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.404 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.404 Test: blockdev write read max offset ...passed 00:07:52.404 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.404 Test: blockdev writev readv 8 blocks ...passed 00:07:52.404 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.404 Test: blockdev writev readv block ...passed 00:07:52.404 Test: blockdev writev readv size > 128k ...passed 00:07:52.404 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.404 Test: blockdev comparev and writev ...[2024-11-19 07:23:01.581560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a206000 len:0x1000 00:07:52.404 [2024-11-19 07:23:01.581592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.404 passed 00:07:52.404 Test: blockdev nvme passthru rw ...passed 00:07:52.404 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.404 Test: blockdev nvme admin passthru ...[2024-11-19 07:23:01.582116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.404 [2024-11-19 07:23:01.582132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.404 passed 00:07:52.404 Test: blockdev copy ...passed 00:07:52.404 Suite: bdevio tests on: Nvme0n1 00:07:52.404 Test: blockdev write read block ...passed 00:07:52.404 Test: blockdev write zeroes read block ...passed 00:07:52.682 Test: blockdev write zeroes read no split ...passed 00:07:52.682 Test: blockdev write zeroes read split ...passed 00:07:52.682 Test: blockdev write zeroes read split partial ...passed 00:07:52.682 Test: blockdev reset ...[2024-11-19 07:23:01.695278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:52.682 [2024-11-19 07:23:01.697825] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:52.682 passed 00:07:52.682 Test: blockdev write read 8 blocks ...passed 00:07:52.682 Test: blockdev write read size > 128k ...passed 00:07:52.682 Test: blockdev write read invalid size ...passed 00:07:52.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.682 Test: blockdev write read max offset ...passed 00:07:52.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.682 Test: blockdev writev readv 8 blocks ...passed 00:07:52.682 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.682 Test: blockdev writev readv block ...passed 00:07:52.682 Test: blockdev writev readv size > 128k ...passed 00:07:52.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.682 Test: blockdev comparev and writev ...passed 00:07:52.682 Test: blockdev nvme passthru rw ...[2024-11-19 07:23:01.703125] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:52.682 separate metadata which is not supported yet. 
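Nvme0n1 is the one bdev in this run formatted with separate metadata, which is exactly why comparev_and_writev is skipped on it below. Assuming bdev_get_bdevs reports an md_size field for such namespaces (an assumption, not shown in this log), a quick check would be:

# A non-zero md_size would mark the bdev as carrying per-block metadata,
# the condition that makes bdevio skip comparev_and_writev.
./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0].md_size'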
00:07:52.682 passed 00:07:52.682 Test: blockdev nvme passthru vendor specific ...[2024-11-19 07:23:01.703490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:52.682 [2024-11-19 07:23:01.703516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:52.682 passed 00:07:52.682 Test: blockdev nvme admin passthru ...passed 00:07:52.682 Test: blockdev copy ...passed 00:07:52.682 00:07:52.682 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.682 suites 6 6 n/a 0 0 00:07:52.682 tests 138 138 138 0 0 00:07:52.682 asserts 893 893 893 0 n/a 00:07:52.682 00:07:52.682 Elapsed time = 1.224 seconds 00:07:52.682 0 00:07:52.682 07:23:01 -- bdev/blockdev.sh@293 -- # killprocess 60418 00:07:52.682 07:23:01 -- common/autotest_common.sh@936 -- # '[' -z 60418 ']' 00:07:52.682 07:23:01 -- common/autotest_common.sh@940 -- # kill -0 60418 00:07:52.682 07:23:01 -- common/autotest_common.sh@941 -- # uname 00:07:52.682 07:23:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:52.682 07:23:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60418 00:07:52.682 07:23:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:52.682 07:23:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:52.682 killing process with pid 60418 00:07:52.682 07:23:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60418' 00:07:52.682 07:23:01 -- common/autotest_common.sh@955 -- # kill 60418 00:07:52.682 07:23:01 -- common/autotest_common.sh@960 -- # wait 60418 00:07:53.615 07:23:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:53.615 00:07:53.615 real 0m2.433s 00:07:53.615 user 0m6.033s 00:07:53.615 sys 0m0.284s 00:07:53.615 07:23:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.615 ************************************ 00:07:53.615 END TEST bdev_bounds 00:07:53.615 ************************************ 00:07:53.615 07:23:02 -- common/autotest_common.sh@10 -- # set +x 00:07:53.615 07:23:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:53.615 07:23:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:53.615 07:23:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.615 07:23:02 -- common/autotest_common.sh@10 -- # set +x 00:07:53.615 ************************************ 00:07:53.616 START TEST bdev_nbd 00:07:53.616 ************************************ 00:07:53.616 07:23:02 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:53.616 07:23:02 -- bdev/blockdev.sh@298 -- # uname -s 00:07:53.616 07:23:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:53.616 07:23:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.616 07:23:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:53.616 07:23:02 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:53.616 07:23:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:53.616 07:23:02 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:53.616 07:23:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:53.616 07:23:02 -- bdev/blockdev.sh@309 -- # 
nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:53.616 07:23:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:53.616 07:23:02 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:53.616 07:23:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:53.616 07:23:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:53.616 07:23:02 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:53.616 07:23:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:53.616 07:23:02 -- bdev/blockdev.sh@316 -- # nbd_pid=60477 00:07:53.616 07:23:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:53.616 07:23:02 -- bdev/blockdev.sh@318 -- # waitforlisten 60477 /var/tmp/spdk-nbd.sock 00:07:53.616 07:23:02 -- common/autotest_common.sh@829 -- # '[' -z 60477 ']' 00:07:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:53.616 07:23:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.616 07:23:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.616 07:23:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:53.616 07:23:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.616 07:23:02 -- common/autotest_common.sh@10 -- # set +x 00:07:53.616 07:23:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:53.616 [2024-11-19 07:23:02.807730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
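bdev_svc comes up here purely to host the NBD RPCs, and the sixteen-entry nbd_all list matches the /dev/nbd* nodes the stock nbd module exposes. The test only runs because /sys/module/nbd was present; on a box where it is not, a hypothetical prep step would be:

# Load the kernel NBD driver with enough device nodes to cover nbd_all.
sudo modprobe nbd nbds_max=16
ls /dev/nbd*    # expect nbd0 through nbd15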
00:07:53.616 [2024-11-19 07:23:02.807836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.874 [2024-11-19 07:23:02.956068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.132 [2024-11-19 07:23:03.136248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.066 07:23:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.066 07:23:04 -- common/autotest_common.sh@862 -- # return 0 00:07:55.066 07:23:04 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@24 -- # local i 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.066 07:23:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:55.325 07:23:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:55.325 07:23:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:55.325 07:23:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:55.325 07:23:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:55.325 07:23:04 -- common/autotest_common.sh@867 -- # local i 00:07:55.325 07:23:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.325 07:23:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.325 07:23:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:55.325 07:23:04 -- common/autotest_common.sh@871 -- # break 00:07:55.325 07:23:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.325 07:23:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.325 07:23:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.325 1+0 records in 00:07:55.325 1+0 records out 00:07:55.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000983238 s, 4.2 MB/s 00:07:55.325 07:23:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.325 07:23:04 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.325 07:23:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.325 07:23:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.325 07:23:04 -- common/autotest_common.sh@887 -- # return 0 00:07:55.325 07:23:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.325 07:23:04 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.325 07:23:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:55.583 07:23:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:55.583 07:23:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:55.583 07:23:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:55.583 07:23:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:55.583 07:23:04 -- common/autotest_common.sh@867 -- # local i 00:07:55.583 07:23:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.583 07:23:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.583 07:23:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:55.583 07:23:04 -- common/autotest_common.sh@871 -- # break 00:07:55.583 07:23:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.583 07:23:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.583 07:23:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.583 1+0 records in 00:07:55.583 1+0 records out 00:07:55.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743089 s, 5.5 MB/s 00:07:55.583 07:23:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.583 07:23:04 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.583 07:23:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.583 07:23:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.583 07:23:04 -- common/autotest_common.sh@887 -- # return 0 00:07:55.583 07:23:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.583 07:23:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.583 07:23:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:55.841 07:23:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:55.841 07:23:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:55.841 07:23:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:55.841 07:23:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:55.841 07:23:04 -- common/autotest_common.sh@867 -- # local i 00:07:55.841 07:23:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.841 07:23:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.841 07:23:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:55.841 07:23:04 -- common/autotest_common.sh@871 -- # break 00:07:55.841 07:23:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.841 07:23:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.841 07:23:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.841 1+0 records in 00:07:55.841 1+0 records out 00:07:55.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000989306 s, 4.1 MB/s 00:07:55.841 07:23:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.841 07:23:04 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.841 07:23:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.841 07:23:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.841 07:23:04 -- common/autotest_common.sh@887 -- # return 0 
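The waitfornbd fragments interleaved above reduce to a small helper: poll /proc/partitions until the kernel publishes the device, then prove it readable with one direct-I/O block. Roughly, as a consolidated sketch (the sleep between retries is an assumption; the xtrace does not show one):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One 4 KiB direct read exercises the full kernel-to-SPDK NBD path.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]    # a non-empty copy means the device really answered
}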
00:07:55.841 07:23:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.841 07:23:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.841 07:23:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:56.100 07:23:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:56.100 07:23:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:56.100 07:23:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:56.100 07:23:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:56.100 07:23:05 -- common/autotest_common.sh@867 -- # local i 00:07:56.100 07:23:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.100 07:23:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.100 07:23:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:56.100 07:23:05 -- common/autotest_common.sh@871 -- # break 00:07:56.100 07:23:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.100 07:23:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.100 07:23:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.100 1+0 records in 00:07:56.100 1+0 records out 00:07:56.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000899568 s, 4.6 MB/s 00:07:56.100 07:23:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.100 07:23:05 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.100 07:23:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.100 07:23:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.100 07:23:05 -- common/autotest_common.sh@887 -- # return 0 00:07:56.100 07:23:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.100 07:23:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:56.100 07:23:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:56.359 07:23:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:56.359 07:23:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:56.359 07:23:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:56.359 07:23:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:56.359 07:23:05 -- common/autotest_common.sh@867 -- # local i 00:07:56.359 07:23:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.359 07:23:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.359 07:23:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:56.359 07:23:05 -- common/autotest_common.sh@871 -- # break 00:07:56.359 07:23:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.359 07:23:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.359 07:23:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.359 1+0 records in 00:07:56.359 1+0 records out 00:07:56.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114657 s, 3.6 MB/s 00:07:56.359 07:23:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.359 07:23:05 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.359 07:23:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.359 07:23:05 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:56.359 07:23:05 -- common/autotest_common.sh@887 -- # return 0 00:07:56.359 07:23:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.359 07:23:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:56.359 07:23:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:56.617 07:23:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:56.617 07:23:05 -- common/autotest_common.sh@867 -- # local i 00:07:56.617 07:23:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.617 07:23:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.617 07:23:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:56.617 07:23:05 -- common/autotest_common.sh@871 -- # break 00:07:56.617 07:23:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.617 07:23:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.617 07:23:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.617 1+0 records in 00:07:56.617 1+0 records out 00:07:56.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127414 s, 3.2 MB/s 00:07:56.617 07:23:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.617 07:23:05 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.617 07:23:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.617 07:23:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.617 07:23:05 -- common/autotest_common.sh@887 -- # return 0 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd0", 00:07:56.617 "bdev_name": "Nvme0n1" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd1", 00:07:56.617 "bdev_name": "Nvme1n1" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd2", 00:07:56.617 "bdev_name": "Nvme2n1" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd3", 00:07:56.617 "bdev_name": "Nvme2n2" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd4", 00:07:56.617 "bdev_name": "Nvme2n3" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd5", 00:07:56.617 "bdev_name": "Nvme3n1" 00:07:56.617 } 00:07:56.617 ]' 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:56.617 07:23:05 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd0", 00:07:56.617 "bdev_name": "Nvme0n1" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd1", 00:07:56.617 "bdev_name": "Nvme1n1" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd2", 00:07:56.617 "bdev_name": "Nvme2n1" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": "/dev/nbd3", 00:07:56.617 "bdev_name": "Nvme2n2" 00:07:56.617 }, 00:07:56.617 { 00:07:56.617 "nbd_device": 
"/dev/nbd4", 00:07:56.617 "bdev_name": "Nvme2n3" 00:07:56.618 }, 00:07:56.618 { 00:07:56.618 "nbd_device": "/dev/nbd5", 00:07:56.618 "bdev_name": "Nvme3n1" 00:07:56.618 } 00:07:56.618 ]' 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@51 -- # local i 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.618 07:23:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@41 -- # break 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.875 07:23:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@41 -- # break 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.133 07:23:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@41 -- # break 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:57.390 
07:23:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@41 -- # break 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.390 07:23:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@41 -- # break 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.648 07:23:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@41 -- # break 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.905 07:23:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@65 -- # true 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@65 -- # count=0 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@122 -- # count=0 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@127 -- # return 0 00:07:58.165 07:23:07 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@12 -- # local i 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.165 07:23:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:58.425 /dev/nbd0 00:07:58.425 07:23:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:58.425 07:23:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:58.425 07:23:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:58.425 07:23:07 -- common/autotest_common.sh@867 -- # local i 00:07:58.425 07:23:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.425 07:23:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.425 07:23:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:58.425 07:23:07 -- common/autotest_common.sh@871 -- # break 00:07:58.425 07:23:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.425 07:23:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.425 07:23:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.425 1+0 records in 00:07:58.425 1+0 records out 00:07:58.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907729 s, 4.5 MB/s 00:07:58.425 07:23:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.425 07:23:07 -- common/autotest_common.sh@884 -- # size=4096 00:07:58.425 07:23:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.425 07:23:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.425 07:23:07 -- common/autotest_common.sh@887 -- # return 0 00:07:58.425 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.425 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.425 07:23:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:58.425 /dev/nbd1 00:07:58.686 07:23:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:58.686 07:23:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:58.686 07:23:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:58.686 07:23:07 -- common/autotest_common.sh@867 -- # local i 00:07:58.686 07:23:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.686 07:23:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.686 07:23:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:58.686 07:23:07 -- common/autotest_common.sh@871 -- # break 
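The empty [] returned by nbd_get_disks a moment ago confirmed the first round tore every mapping down; this second pass rebuilds all six on /dev/nbd0 through /dev/nbd13. While they are up, the device-to-bdev pairing can be printed with the same jq filter the harness uses:

# Pair each NBD node with its backing bdev.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'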
00:07:58.687 07:23:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.687 07:23:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.687 07:23:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.687 1+0 records in 00:07:58.687 1+0 records out 00:07:58.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531671 s, 7.7 MB/s 00:07:58.687 07:23:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.687 07:23:07 -- common/autotest_common.sh@884 -- # size=4096 00:07:58.687 07:23:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.687 07:23:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.687 07:23:07 -- common/autotest_common.sh@887 -- # return 0 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:58.687 /dev/nbd10 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:58.687 07:23:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:58.687 07:23:07 -- common/autotest_common.sh@867 -- # local i 00:07:58.687 07:23:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.687 07:23:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.687 07:23:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:58.687 07:23:07 -- common/autotest_common.sh@871 -- # break 00:07:58.687 07:23:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.687 07:23:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.687 07:23:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.687 1+0 records in 00:07:58.687 1+0 records out 00:07:58.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362659 s, 11.3 MB/s 00:07:58.687 07:23:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.687 07:23:07 -- common/autotest_common.sh@884 -- # size=4096 00:07:58.687 07:23:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.687 07:23:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.687 07:23:07 -- common/autotest_common.sh@887 -- # return 0 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.687 07:23:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:58.948 /dev/nbd11 00:07:58.948 07:23:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:58.948 07:23:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:58.948 07:23:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:58.948 07:23:08 -- common/autotest_common.sh@867 -- # local i 00:07:58.948 07:23:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.948 07:23:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.948 07:23:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:58.948 07:23:08 -- 
common/autotest_common.sh@871 -- # break 00:07:58.948 07:23:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.948 07:23:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.948 07:23:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.948 1+0 records in 00:07:58.948 1+0 records out 00:07:58.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546464 s, 7.5 MB/s 00:07:58.948 07:23:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.948 07:23:08 -- common/autotest_common.sh@884 -- # size=4096 00:07:58.948 07:23:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.948 07:23:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.948 07:23:08 -- common/autotest_common.sh@887 -- # return 0 00:07:58.948 07:23:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.948 07:23:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.948 07:23:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:59.209 /dev/nbd12 00:07:59.209 07:23:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:59.209 07:23:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:59.209 07:23:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:59.209 07:23:08 -- common/autotest_common.sh@867 -- # local i 00:07:59.209 07:23:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.209 07:23:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.209 07:23:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:59.209 07:23:08 -- common/autotest_common.sh@871 -- # break 00:07:59.209 07:23:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.209 07:23:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.209 07:23:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.209 1+0 records in 00:07:59.209 1+0 records out 00:07:59.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520308 s, 7.9 MB/s 00:07:59.209 07:23:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.209 07:23:08 -- common/autotest_common.sh@884 -- # size=4096 00:07:59.209 07:23:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.209 07:23:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.209 07:23:08 -- common/autotest_common.sh@887 -- # return 0 00:07:59.209 07:23:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.210 07:23:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:59.210 07:23:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:59.469 /dev/nbd13 00:07:59.469 07:23:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:59.469 07:23:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:59.469 07:23:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:59.469 07:23:08 -- common/autotest_common.sh@867 -- # local i 00:07:59.469 07:23:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.469 07:23:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.469 07:23:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
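Five of the six devices are up at this point; the trace continues with /dev/nbd13 just below. Condensed, the export loop driving these records pairs each bdev name with an nbd node and calls the nbd_start_disk RPC (paths, socket, and names as in the trace; the loop body is simplified):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

for ((i = 0; i < ${#bdev_list[@]}; i++)); do
    # nbd_start_disk attaches the bdev to the given /dev/nbdX node
    "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
done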
00:07:59.469 07:23:08 -- common/autotest_common.sh@871 -- # break 00:07:59.469 07:23:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.469 07:23:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.469 07:23:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.469 1+0 records in 00:07:59.469 1+0 records out 00:07:59.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520201 s, 7.9 MB/s 00:07:59.469 07:23:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.469 07:23:08 -- common/autotest_common.sh@884 -- # size=4096 00:07:59.469 07:23:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.469 07:23:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.469 07:23:08 -- common/autotest_common.sh@887 -- # return 0 00:07:59.469 07:23:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.469 07:23:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:59.469 07:23:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.469 07:23:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.469 07:23:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.727 07:23:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd0", 00:07:59.727 "bdev_name": "Nvme0n1" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd1", 00:07:59.727 "bdev_name": "Nvme1n1" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd10", 00:07:59.727 "bdev_name": "Nvme2n1" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd11", 00:07:59.727 "bdev_name": "Nvme2n2" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd12", 00:07:59.727 "bdev_name": "Nvme2n3" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd13", 00:07:59.727 "bdev_name": "Nvme3n1" 00:07:59.727 } 00:07:59.727 ]' 00:07:59.727 07:23:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd0", 00:07:59.727 "bdev_name": "Nvme0n1" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd1", 00:07:59.727 "bdev_name": "Nvme1n1" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd10", 00:07:59.727 "bdev_name": "Nvme2n1" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd11", 00:07:59.727 "bdev_name": "Nvme2n2" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd12", 00:07:59.727 "bdev_name": "Nvme2n3" 00:07:59.727 }, 00:07:59.727 { 00:07:59.727 "nbd_device": "/dev/nbd13", 00:07:59.728 "bdev_name": "Nvme3n1" 00:07:59.728 } 00:07:59.728 ]' 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:59.728 /dev/nbd1 00:07:59.728 /dev/nbd10 00:07:59.728 /dev/nbd11 00:07:59.728 /dev/nbd12 00:07:59.728 /dev/nbd13' 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:59.728 /dev/nbd1 00:07:59.728 /dev/nbd10 00:07:59.728 /dev/nbd11 00:07:59.728 /dev/nbd12 00:07:59.728 /dev/nbd13' 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@65 -- # count=6 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@95 -- # 
count=6 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:59.728 256+0 records in 00:07:59.728 256+0 records out 00:07:59.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495508 s, 212 MB/s 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:59.728 256+0 records in 00:07:59.728 256+0 records out 00:07:59.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0511159 s, 20.5 MB/s 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:59.728 256+0 records in 00:07:59.728 256+0 records out 00:07:59.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0516646 s, 20.3 MB/s 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:59.728 256+0 records in 00:07:59.728 256+0 records out 00:07:59.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0530325 s, 19.8 MB/s 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.728 07:23:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:59.986 256+0 records in 00:07:59.986 256+0 records out 00:07:59.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0497116 s, 21.1 MB/s 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:59.986 256+0 records in 00:07:59.986 256+0 records out 00:07:59.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0537741 s, 19.5 MB/s 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:59.986 256+0 records in 00:07:59.986 256+0 records out 00:07:59.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0558187 s, 18.8 MB/s 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.986 
07:23:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:59.986 07:23:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@51 -- # local i 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.987 07:23:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@41 -- # break 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.244 07:23:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.502 07:23:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.503 07:23:09 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@41 -- # break 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.503 07:23:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@41 -- # break 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.761 07:23:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@41 -- # break 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@41 -- # break 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.020 07:23:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@41 -- # break 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.278 07:23:10 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@65 -- # true 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@65 -- # count=0 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@104 -- # count=0 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@109 -- # return 0 00:08:01.535 07:23:10 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:01.535 07:23:10 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:01.793 malloc_lvol_verify 00:08:01.793 07:23:10 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:02.051 6762163e-04fe-4067-83e5-943013990902 00:08:02.051 07:23:11 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:02.051 1d3ba01b-3d97-4c01-8bb1-37e40a73791c 00:08:02.051 07:23:11 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:02.321 /dev/nbd0 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:02.321 mke2fs 1.47.0 (5-Feb-2023) 00:08:02.321 Discarding device blocks: 0/4096 done 00:08:02.321 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:02.321 00:08:02.321 Allocating group tables: 0/1 done 00:08:02.321 Writing inode tables: 0/1 done 00:08:02.321 Creating journal (1024 blocks): done 00:08:02.321 Writing superblocks and filesystem accounting information: 0/1 done 00:08:02.321 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@51 -- # local i 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.321 07:23:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@41 -- # break 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:02.595 07:23:11 -- bdev/nbd_common.sh@147 -- # return 0 00:08:02.595 07:23:11 -- bdev/blockdev.sh@324 -- # killprocess 60477 00:08:02.595 07:23:11 -- common/autotest_common.sh@936 -- # '[' -z 60477 ']' 00:08:02.595 07:23:11 -- common/autotest_common.sh@940 -- # kill -0 60477 00:08:02.595 07:23:11 -- common/autotest_common.sh@941 -- # uname 00:08:02.595 07:23:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:02.595 07:23:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60477 00:08:02.595 07:23:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:02.595 07:23:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:02.595 killing process with pid 60477 00:08:02.595 07:23:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60477' 00:08:02.595 07:23:11 -- common/autotest_common.sh@955 -- # kill 60477 00:08:02.595 07:23:11 -- common/autotest_common.sh@960 -- # wait 60477 00:08:03.165 ************************************ 00:08:03.165 END TEST bdev_nbd 00:08:03.165 ************************************ 00:08:03.165 07:23:12 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:03.165 00:08:03.165 real 0m9.602s 00:08:03.165 user 0m13.661s 00:08:03.165 sys 0m2.859s 00:08:03.166 07:23:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.166 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:08:03.166 07:23:12 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:03.166 07:23:12 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:03.166 skipping fio tests on NVMe due to multi-ns failures. 00:08:03.166 07:23:12 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:03.166 07:23:12 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:03.166 07:23:12 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:03.166 07:23:12 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:03.166 07:23:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.166 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:08:03.166 ************************************ 00:08:03.166 START TEST bdev_verify 00:08:03.166 ************************************ 00:08:03.166 07:23:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:03.426 [2024-11-19 07:23:12.444503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
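killprocess 60477 above tears down the nbd RPC server once the ext4 smoke test passes. Its shape, reduced to the essentials visible in the trace (the reactor_0 comm check in the real helper decides whether sudo is needed; this sketch skips that branch):

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 1                # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for SPDK apps
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}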
00:08:03.426 [2024-11-19 07:23:12.444605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60853 ] 00:08:03.426 [2024-11-19 07:23:12.593097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.688 [2024-11-19 07:23:12.774464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.688 [2024-11-19 07:23:12.774621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.258 Running I/O for 5 seconds... 00:08:09.542 00:08:09.542 Latency(us) 00:08:09.542 [2024-11-19T07:23:18.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x0 length 0xbd0bd 00:08:09.542 Nvme0n1 : 5.03 3684.70 14.39 0.00 0.00 34662.97 4310.25 42144.69 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:09.542 Nvme0n1 : 5.04 3590.77 14.03 0.00 0.00 35443.87 5494.94 35893.56 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x0 length 0xa0000 00:08:09.542 Nvme1n1 : 5.04 3682.53 14.38 0.00 0.00 34651.41 6402.36 39523.25 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0xa0000 length 0xa0000 00:08:09.542 Nvme1n1 : 5.05 3597.04 14.05 0.00 0.00 35377.09 2205.54 34885.32 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x0 length 0x80000 00:08:09.542 Nvme2n1 : 5.04 3681.61 14.38 0.00 0.00 34614.15 6906.49 36700.16 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x80000 length 0x80000 00:08:09.542 Nvme2n1 : 5.05 3596.20 14.05 0.00 0.00 35350.39 2772.68 35288.62 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x0 length 0x80000 00:08:09.542 Nvme2n2 : 5.04 3680.72 14.38 0.00 0.00 34595.70 7158.55 35893.56 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x80000 length 0x80000 00:08:09.542 Nvme2n2 : 5.05 3595.37 14.04 0.00 0.00 35333.13 3352.42 35288.62 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x0 length 0x80000 00:08:09.542 Nvme2n3 : 5.04 3679.82 14.37 0.00 0.00 34575.72 7713.08 35490.26 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x80000 length 0x80000 00:08:09.542 Nvme2n3 : 5.03 3589.70 14.02 0.00 0.00 35560.08 6377.16 46984.27 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x0 length 0x20000 00:08:09.542 Nvme3n1 : 
5.04 3685.08 14.39 0.00 0.00 34515.93 269.39 35893.56 00:08:09.542 [2024-11-19T07:23:18.792Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:09.542 Verification LBA range: start 0x20000 length 0x20000 00:08:09.542 Nvme3n1 : 5.04 3593.21 14.04 0.00 0.00 35474.74 2810.49 40128.20 00:08:09.542 [2024-11-19T07:23:18.792Z] =================================================================================================================== 00:08:09.542 [2024-11-19T07:23:18.792Z] Total : 43656.75 170.53 0.00 0.00 35008.06 269.39 46984.27 00:08:41.651 00:08:41.651 real 0m34.297s 00:08:41.651 user 0m41.966s 00:08:41.651 sys 0m0.608s 00:08:41.651 07:23:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.651 ************************************ 00:08:41.651 END TEST bdev_verify 00:08:41.651 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:08:41.651 ************************************ 00:08:41.651 07:23:46 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:41.651 07:23:46 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:41.651 07:23:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.651 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:08:41.651 ************************************ 00:08:41.651 START TEST bdev_verify_big_io 00:08:41.651 ************************************ 00:08:41.651 07:23:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:41.651 [2024-11-19 07:23:46.784023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:41.651 [2024-11-19 07:23:46.784126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61211 ] 00:08:41.651 [2024-11-19 07:23:46.934134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.651 [2024-11-19 07:23:47.116740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.651 [2024-11-19 07:23:47.116775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.651 Running I/O for 5 seconds... 
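bdev_verify above and bdev_verify_big_io below drive the same bdevperf binary and differ only in I/O size: 4 KiB versus 64 KiB per request, 128 deep, five seconds, two cores, with data verification enabled. Reconstructed from the flags shown in the trace:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

"$bdevperf" --json "$conf" -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io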
00:08:44.191 00:08:44.191 Latency(us) 00:08:44.191 [2024-11-19T07:23:53.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.191 [2024-11-19T07:23:53.441Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.191 Verification LBA range: start 0x0 length 0xbd0b 00:08:44.191 Nvme0n1 : 5.35 309.91 19.37 0.00 0.00 403532.75 45371.08 677541.42 00:08:44.191 [2024-11-19T07:23:53.441Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:44.192 Nvme0n1 : 5.36 309.66 19.35 0.00 0.00 405489.93 6351.95 561391.46 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x0 length 0xa000 00:08:44.192 Nvme1n1 : 5.36 318.21 19.89 0.00 0.00 391736.81 9023.80 619466.44 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0xa000 length 0xa000 00:08:44.192 Nvme1n1 : 5.37 318.15 19.88 0.00 0.00 394509.46 5620.97 516222.03 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x0 length 0x8000 00:08:44.192 Nvme2n1 : 5.37 318.12 19.88 0.00 0.00 386176.13 9628.75 561391.46 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x8000 length 0x8000 00:08:44.192 Nvme2n1 : 5.37 317.98 19.87 0.00 0.00 390141.48 7410.61 467826.22 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x0 length 0x8000 00:08:44.192 Nvme2n2 : 5.37 317.97 19.87 0.00 0.00 380645.63 11191.53 503316.48 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x8000 length 0x8000 00:08:44.192 Nvme2n2 : 5.37 317.76 19.86 0.00 0.00 385665.69 9981.64 422656.79 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x0 length 0x8000 00:08:44.192 Nvme2n3 : 5.38 324.86 20.30 0.00 0.00 368029.46 8418.86 445241.50 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x8000 length 0x8000 00:08:44.192 Nvme2n3 : 5.37 317.65 19.85 0.00 0.00 381204.57 10838.65 374260.97 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x0 length 0x2000 00:08:44.192 Nvme3n1 : 5.40 346.47 21.65 0.00 0.00 340668.15 3528.86 519448.42 00:08:44.192 [2024-11-19T07:23:53.442Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.192 Verification LBA range: start 0x2000 length 0x2000 00:08:44.192 Nvme3n1 : 5.38 317.55 19.85 0.00 0.00 376763.06 11494.01 364581.81 00:08:44.192 [2024-11-19T07:23:53.442Z] =================================================================================================================== 00:08:44.192 [2024-11-19T07:23:53.442Z] Total : 3834.29 239.64 0.00 0.00 383240.54 3528.86 677541.42 00:08:46.094 00:08:46.094 real 0m8.335s 00:08:46.094 user 
0m15.289s 00:08:46.094 sys 0m0.247s 00:08:46.094 07:23:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.094 07:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:46.094 ************************************ 00:08:46.094 END TEST bdev_verify_big_io 00:08:46.094 ************************************ 00:08:46.094 07:23:55 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.094 07:23:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:46.094 07:23:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.094 07:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:46.094 ************************************ 00:08:46.094 START TEST bdev_write_zeroes 00:08:46.094 ************************************ 00:08:46.094 07:23:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.094 [2024-11-19 07:23:55.153568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:46.094 [2024-11-19 07:23:55.153675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61315 ] 00:08:46.094 [2024-11-19 07:23:55.300022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.352 [2024-11-19 07:23:55.445147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.918 Running I/O for 1 seconds... 00:08:47.858 00:08:47.858 Latency(us) 00:08:47.858 [2024-11-19T07:23:57.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.858 [2024-11-19T07:23:57.108Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.858 Nvme0n1 : 1.01 12337.42 48.19 0.00 0.00 10353.57 7511.43 20164.92 00:08:47.858 [2024-11-19T07:23:57.108Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.858 Nvme1n1 : 1.01 12323.00 48.14 0.00 0.00 10352.76 7713.08 19559.98 00:08:47.858 [2024-11-19T07:23:57.108Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.858 Nvme2n1 : 1.01 12309.06 48.08 0.00 0.00 10345.90 7461.02 19459.15 00:08:47.858 [2024-11-19T07:23:57.108Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.858 Nvme2n2 : 1.02 12295.17 48.03 0.00 0.00 10322.81 6427.57 19862.45 00:08:47.858 [2024-11-19T07:23:57.108Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.858 Nvme2n3 : 1.02 12281.00 47.97 0.00 0.00 10320.12 6553.60 19761.62 00:08:47.858 [2024-11-19T07:23:57.108Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.858 Nvme3n1 : 1.02 12267.04 47.92 0.00 0.00 10311.36 7158.55 20164.92 00:08:47.858 [2024-11-19T07:23:57.108Z] =================================================================================================================== 00:08:47.858 [2024-11-19T07:23:57.108Z] Total : 73812.69 288.33 0.00 0.00 10334.42 6427.57 20164.92 00:08:48.802 00:08:48.802 real 0m2.690s 00:08:48.802 user 0m2.420s 00:08:48.802 sys 0m0.156s 00:08:48.802 07:23:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 
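Every case in this log is wrapped by run_test, which produces the banner pairs and the real/user/sys timings seen throughout. A stripped-down re-creation of that wrapper (the real one in autotest_common.sh also records results for the final summary), reusing the bdevperf variables from the sketch above and the write_zeroes flags from this run:

run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                                 # produces the real/user/sys lines
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}

run_test_sketch bdev_write_zeroes "$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1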
00:08:48.802 07:23:57 -- common/autotest_common.sh@10 -- # set +x 00:08:48.802 ************************************ 00:08:48.802 END TEST bdev_write_zeroes 00:08:48.802 ************************************ 00:08:48.802 07:23:57 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.802 07:23:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:48.802 07:23:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.802 07:23:57 -- common/autotest_common.sh@10 -- # set +x 00:08:48.802 ************************************ 00:08:48.802 START TEST bdev_json_nonenclosed 00:08:48.802 ************************************ 00:08:48.802 07:23:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.802 [2024-11-19 07:23:57.896417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:48.802 [2024-11-19 07:23:57.896533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61368 ] 00:08:48.802 [2024-11-19 07:23:58.048854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.063 [2024-11-19 07:23:58.219783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.063 [2024-11-19 07:23:58.219932] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:49.063 [2024-11-19 07:23:58.219950] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.324 00:08:49.324 real 0m0.668s 00:08:49.324 user 0m0.470s 00:08:49.324 sys 0m0.093s 00:08:49.324 07:23:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.324 ************************************ 00:08:49.324 END TEST bdev_json_nonenclosed 00:08:49.324 ************************************ 00:08:49.324 07:23:58 -- common/autotest_common.sh@10 -- # set +x 00:08:49.324 07:23:58 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:49.324 07:23:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:49.324 07:23:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.324 07:23:58 -- common/autotest_common.sh@10 -- # set +x 00:08:49.324 ************************************ 00:08:49.324 START TEST bdev_json_nonarray 00:08:49.324 ************************************ 00:08:49.324 07:23:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:49.585 [2024-11-19 07:23:58.605961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
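bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests: each feeds bdevperf a deliberately malformed config and passes only if spdk_app_stop exits non-zero. Illustrative payloads matching the two error messages (assumed shapes; the real files live under test/bdev/):

# rejected with: Invalid JSON configuration: not enclosed in {}.
echo '"subsystems": []' > nonenclosed.json

# rejected with: Invalid JSON configuration: 'subsystems' should be an array.
echo '{ "subsystems": {} }' > nonarray.json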
00:08:49.585 [2024-11-19 07:23:58.606070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61394 ] 00:08:49.585 [2024-11-19 07:23:58.757389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.846 [2024-11-19 07:23:58.954743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.846 [2024-11-19 07:23:58.954915] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:49.846 [2024-11-19 07:23:58.954938] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.106 00:08:50.106 real 0m0.690s 00:08:50.106 user 0m0.496s 00:08:50.106 sys 0m0.089s 00:08:50.106 07:23:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.106 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:50.106 ************************************ 00:08:50.106 END TEST bdev_json_nonarray 00:08:50.106 ************************************ 00:08:50.106 07:23:59 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:50.106 07:23:59 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:50.106 07:23:59 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:50.106 07:23:59 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:50.106 07:23:59 -- bdev/blockdev.sh@809 -- # cleanup 00:08:50.106 07:23:59 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:50.106 07:23:59 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:50.106 07:23:59 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:50.106 07:23:59 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:50.106 07:23:59 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:50.106 07:23:59 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:50.106 00:08:50.106 real 1m4.120s 00:08:50.106 user 1m25.378s 00:08:50.106 sys 0m5.193s 00:08:50.106 07:23:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.106 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:50.106 ************************************ 00:08:50.106 END TEST blockdev_nvme 00:08:50.106 ************************************ 00:08:50.106 07:23:59 -- spdk/autotest.sh@206 -- # uname -s 00:08:50.106 07:23:59 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:50.106 07:23:59 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:50.106 07:23:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:50.106 07:23:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.106 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:50.106 ************************************ 00:08:50.106 START TEST blockdev_nvme_gpt 00:08:50.106 ************************************ 00:08:50.106 07:23:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:50.365 * Looking for test storage... 
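The lcov version gate traced just below (cmp_versions in scripts/common.sh, here checking 1.15 < 2) splits both strings on dots and compares field by field. A minimal re-implementation of the same idea (function name illustrative):

version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field wins
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                        # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x"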
00:08:50.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:50.365 07:23:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:50.365 07:23:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:50.365 07:23:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:50.365 07:23:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:50.365 07:23:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:50.365 07:23:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:50.365 07:23:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:50.365 07:23:59 -- scripts/common.sh@335 -- # IFS=.-: 00:08:50.365 07:23:59 -- scripts/common.sh@335 -- # read -ra ver1 00:08:50.365 07:23:59 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.365 07:23:59 -- scripts/common.sh@336 -- # read -ra ver2 00:08:50.365 07:23:59 -- scripts/common.sh@337 -- # local 'op=<' 00:08:50.365 07:23:59 -- scripts/common.sh@339 -- # ver1_l=2 00:08:50.365 07:23:59 -- scripts/common.sh@340 -- # ver2_l=1 00:08:50.365 07:23:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:50.365 07:23:59 -- scripts/common.sh@343 -- # case "$op" in 00:08:50.365 07:23:59 -- scripts/common.sh@344 -- # : 1 00:08:50.365 07:23:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:50.365 07:23:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.365 07:23:59 -- scripts/common.sh@364 -- # decimal 1 00:08:50.365 07:23:59 -- scripts/common.sh@352 -- # local d=1 00:08:50.365 07:23:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.365 07:23:59 -- scripts/common.sh@354 -- # echo 1 00:08:50.365 07:23:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:50.365 07:23:59 -- scripts/common.sh@365 -- # decimal 2 00:08:50.365 07:23:59 -- scripts/common.sh@352 -- # local d=2 00:08:50.365 07:23:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.365 07:23:59 -- scripts/common.sh@354 -- # echo 2 00:08:50.365 07:23:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:50.365 07:23:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:50.365 07:23:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:50.365 07:23:59 -- scripts/common.sh@367 -- # return 0 00:08:50.365 07:23:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.365 07:23:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:50.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.365 --rc genhtml_branch_coverage=1 00:08:50.365 --rc genhtml_function_coverage=1 00:08:50.365 --rc genhtml_legend=1 00:08:50.365 --rc geninfo_all_blocks=1 00:08:50.365 --rc geninfo_unexecuted_blocks=1 00:08:50.365 00:08:50.365 ' 00:08:50.365 07:23:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:50.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.365 --rc genhtml_branch_coverage=1 00:08:50.365 --rc genhtml_function_coverage=1 00:08:50.365 --rc genhtml_legend=1 00:08:50.365 --rc geninfo_all_blocks=1 00:08:50.365 --rc geninfo_unexecuted_blocks=1 00:08:50.365 00:08:50.365 ' 00:08:50.365 07:23:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:50.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.365 --rc genhtml_branch_coverage=1 00:08:50.365 --rc genhtml_function_coverage=1 00:08:50.365 --rc genhtml_legend=1 00:08:50.365 --rc geninfo_all_blocks=1 00:08:50.365 --rc geninfo_unexecuted_blocks=1 00:08:50.365 00:08:50.365 ' 00:08:50.365 07:23:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:50.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.365 --rc genhtml_branch_coverage=1 00:08:50.365 --rc genhtml_function_coverage=1 00:08:50.365 --rc genhtml_legend=1 00:08:50.365 --rc geninfo_all_blocks=1 00:08:50.365 --rc geninfo_unexecuted_blocks=1 00:08:50.365 00:08:50.365 ' 00:08:50.365 07:23:59 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:50.365 07:23:59 -- bdev/nbd_common.sh@6 -- # set -e 00:08:50.365 07:23:59 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:50.365 07:23:59 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:50.365 07:23:59 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:50.365 07:23:59 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:50.365 07:23:59 -- bdev/blockdev.sh@18 -- # : 00:08:50.365 07:23:59 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:50.365 07:23:59 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:50.365 07:23:59 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:50.365 07:23:59 -- bdev/blockdev.sh@672 -- # uname -s 00:08:50.365 07:23:59 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:50.365 07:23:59 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:50.365 07:23:59 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:50.365 07:23:59 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:50.365 07:23:59 -- bdev/blockdev.sh@682 -- # dek= 00:08:50.365 07:23:59 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:50.365 07:23:59 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:50.365 07:23:59 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:50.365 07:23:59 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:50.365 07:23:59 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:50.365 07:23:59 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:50.365 07:23:59 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61471 00:08:50.365 07:23:59 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:50.365 07:23:59 -- bdev/blockdev.sh@47 -- # waitforlisten 61471 00:08:50.365 07:23:59 -- common/autotest_common.sh@829 -- # '[' -z 61471 ']' 00:08:50.365 07:23:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.365 07:23:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.365 07:23:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.365 07:23:59 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:50.365 07:23:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.365 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:50.365 [2024-11-19 07:23:59.538013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
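waitforlisten 61471 above blocks until the freshly started spdk_tgt answers on its RPC socket, bailing out if the process dies first. A simplified version of that wait (loop bound and sleep are illustrative; rpc_get_methods is the usual liveness probe):

spdk_tgt_pid=$!        # assumed: spdk_tgt was launched with '&' just before
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for ((i = 0; i < 100; i++)); do
    # succeeds as soon as the target is listening on /var/tmp/spdk.sock
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$spdk_tgt_pid" || { echo "spdk_tgt died before listening"; exit 1; }
    sleep 0.5
done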
00:08:50.365 [2024-11-19 07:23:59.538129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61471 ] 00:08:50.626 [2024-11-19 07:23:59.685793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.626 [2024-11-19 07:23:59.870067] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.626 [2024-11-19 07:23:59.870316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.008 07:24:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.008 07:24:01 -- common/autotest_common.sh@862 -- # return 0 00:08:52.008 07:24:01 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:52.008 07:24:01 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:52.008 07:24:01 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:52.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.268 Waiting for block devices as requested 00:08:52.269 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.530 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.530 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.530 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.843 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:57.843 07:24:06 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:57.843 07:24:06 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:57.843 07:24:06 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:57.843 07:24:06 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:57.843 07:24:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:57.843 07:24:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:57.843 07:24:06 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:57.843 07:24:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:57.843 07:24:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:57.843 07:24:06 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:57.843 07:24:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:57.843 07:24:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:57.843 07:24:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:57.843 07:24:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:57.843 07:24:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:57.843 07:24:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:57.843 07:24:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:57.843 07:24:06 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:57.843 07:24:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:57.843 07:24:06 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:57.843 07:24:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:57.843 07:24:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:57.843 07:24:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:57.844 07:24:06 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:57.844 07:24:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:57.844 07:24:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:57.844 07:24:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:57.844 07:24:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:57.844 07:24:06 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:57.844 07:24:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:57.844 07:24:06 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:57.844 07:24:06 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:57.844 07:24:06 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:57.844 07:24:06 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:57.844 07:24:06 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:57.844 07:24:06 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:57.844 07:24:06 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:57.844 07:24:06 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:57.844 07:24:06 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:57.844 BYT; 00:08:57.844 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:57.844 07:24:06 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:57.844 BYT; 00:08:57.844 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:57.844 07:24:06 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:57.844 07:24:06 -- bdev/blockdev.sh@114 -- # break 00:08:57.844 07:24:06 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:57.844 07:24:06 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:57.844 07:24:06 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:57.844 07:24:06 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:57.844 07:24:06 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:57.844 07:24:06 -- scripts/common.sh@410 -- # local spdk_guid 00:08:57.844 07:24:06 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:57.844 07:24:06 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:57.844 07:24:06 -- scripts/common.sh@415 -- # IFS='()' 00:08:57.844 07:24:06 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:57.844 07:24:06 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:57.844 07:24:06 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:57.844 07:24:06 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:57.844 07:24:06 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:57.844 07:24:06 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:57.844 07:24:06 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:57.844 07:24:06 -- scripts/common.sh@422 -- # local spdk_guid 00:08:57.844 07:24:06 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:57.844 07:24:06 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:57.844 07:24:06 -- scripts/common.sh@427 -- # IFS='()' 00:08:57.844 07:24:06 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:57.844 07:24:06 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:57.844 07:24:06 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:57.844 07:24:06 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:57.844 07:24:06 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:57.844 07:24:06 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:57.844 07:24:06 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:58.787 The operation has completed successfully. 00:08:58.787 07:24:07 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:59.730 The operation has completed successfully. 
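Note: the GUID handling traced above can be reproduced standalone. A minimal bash sketch of the same idiom, assuming the gpt.h path and device name from this run:

    # gpt.h defines the GUID as a macro call, e.g.
    #   ...SPDK_GPT_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h

    # Split the matching line on parentheses; the argument list lands in $spdk_guid.
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-... as in the trace
    spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b

    # Stamp partition 1 with that type GUID plus a fixed unique GUID,
    # exactly as the sgdisk call above does.
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1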
00:08:59.730 07:24:08 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.673 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.673 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.673 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.673 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.673 07:24:09 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:00.673 07:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.673 07:24:09 -- common/autotest_common.sh@10 -- # set +x 00:09:00.673 [] 00:09:00.673 07:24:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.673 07:24:09 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:00.673 07:24:09 -- bdev/blockdev.sh@79 -- # local json 00:09:00.673 07:24:09 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:00.673 07:24:09 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:00.673 07:24:09 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:00.673 07:24:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.673 07:24:09 -- common/autotest_common.sh@10 -- # set +x 00:09:00.933 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.933 07:24:10 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:00.933 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.933 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:09:00.933 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.933 07:24:10 -- bdev/blockdev.sh@738 -- # cat 00:09:00.933 07:24:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:00.933 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.933 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.194 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.194 07:24:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:01.194 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.194 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.194 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.194 07:24:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:01.194 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.194 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.194 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.194 07:24:10 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:01.194 07:24:10 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:01.194 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.194 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.194 07:24:10 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:01.194 07:24:10 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.194 07:24:10 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:01.194 07:24:10 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:01.195 07:24:10 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "dbcd9a4d-d576-40c9-8cf0-cdbfc01119e2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dbcd9a4d-d576-40c9-8cf0-cdbfc01119e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "0a73fe42-8388-4fea-b3cd-9eb098e257e2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0a73fe42-8388-4fea-b3cd-9eb098e257e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fe171006-33ae-4680-b94d-2ecdb3615150"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe171006-33ae-4680-b94d-2ecdb3615150",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c8df5608-b913-49d9-91b3-96ba3beffc0a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c8df5608-b913-49d9-91b3-96ba3beffc0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "46f707b2-e9e6-45c5-8c18-5cdb12b0f657"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "46f707b2-e9e6-45c5-8c18-5cdb12b0f657",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:01.195 07:24:10 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:01.195 07:24:10 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:01.195 07:24:10 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:01.195 07:24:10 -- bdev/blockdev.sh@752 -- # killprocess 61471 00:09:01.195 07:24:10 -- common/autotest_common.sh@936 -- # '[' -z 61471 ']' 00:09:01.195 07:24:10 -- common/autotest_common.sh@940 -- # kill -0 61471 00:09:01.195 07:24:10 -- common/autotest_common.sh@941 -- # uname 00:09:01.195 07:24:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.195 07:24:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61471 00:09:01.195 killing process with pid 61471 00:09:01.195 07:24:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:01.195 07:24:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:01.195 07:24:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61471' 00:09:01.195 07:24:10 -- common/autotest_common.sh@955 -- # kill 61471 00:09:01.195 07:24:10 -- common/autotest_common.sh@960 -- # wait 61471 00:09:02.580 07:24:11 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:02.580 07:24:11 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:02.580 07:24:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:02.580 07:24:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.580 07:24:11 -- common/autotest_common.sh@10 -- # set +x 00:09:02.580 ************************************ 00:09:02.580 START TEST bdev_hello_world 00:09:02.580 ************************************ 00:09:02.580 07:24:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:02.580 [2024-11-19 07:24:11.560327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:02.580 [2024-11-19 07:24:11.560442] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62124 ] 00:09:02.580 [2024-11-19 07:24:11.707418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.841 [2024-11-19 07:24:11.852225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.102 [2024-11-19 07:24:12.328487] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:03.102 [2024-11-19 07:24:12.328527] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:03.102 [2024-11-19 07:24:12.328543] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:03.102 [2024-11-19 07:24:12.330426] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:03.102 [2024-11-19 07:24:12.330842] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:03.102 [2024-11-19 07:24:12.330868] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:03.102 [2024-11-19 07:24:12.331085] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:03.102 00:09:03.102 [2024-11-19 07:24:12.331104] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:04.042 00:09:04.042 real 0m1.451s 00:09:04.042 user 0m1.174s 00:09:04.042 sys 0m0.171s 00:09:04.042 07:24:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.042 07:24:12 -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 ************************************ 00:09:04.042 END TEST bdev_hello_world 00:09:04.042 ************************************ 00:09:04.042 07:24:12 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:04.042 07:24:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:04.042 07:24:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.042 07:24:12 -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 ************************************ 00:09:04.042 START TEST bdev_bounds 00:09:04.042 ************************************ 00:09:04.042 07:24:13 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:09:04.042 07:24:13 -- bdev/blockdev.sh@288 -- # bdevio_pid=62161 00:09:04.042 Process bdevio pid: 62161 00:09:04.042 07:24:13 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:04.042 07:24:13 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62161' 00:09:04.042 07:24:13 -- bdev/blockdev.sh@291 -- # waitforlisten 62161 00:09:04.042 07:24:13 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:04.042 07:24:13 -- common/autotest_common.sh@829 -- # '[' -z 62161 ']' 00:09:04.042 07:24:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.042 07:24:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.042 07:24:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
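Note: the bdev_hello_world test that just finished reduces to a single invocation of the packaged example; a sketch with the paths from this workspace (the trailing '' in the trace is the empty extra-args slot the harness passes through):

    # Drive the hello_bdev example against the first GPT partition. The JSON
    # file is the bdev configuration used throughout this run.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1p1

    # Success looks like the notices above: open the bdev, open an io
    # channel, write, read back "Hello World!", then stop the app.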
00:09:04.042 07:24:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.042 07:24:13 -- common/autotest_common.sh@10 -- # set +x 00:09:04.042 [2024-11-19 07:24:13.064341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.042 [2024-11-19 07:24:13.064486] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62161 ] 00:09:04.042 [2024-11-19 07:24:13.211948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.300 [2024-11-19 07:24:13.352348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.300 [2024-11-19 07:24:13.352565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.300 [2024-11-19 07:24:13.352684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.868 07:24:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.868 07:24:13 -- common/autotest_common.sh@862 -- # return 0 00:09:04.868 07:24:13 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:04.868 I/O targets: 00:09:04.868 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:04.868 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:04.868 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:04.868 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:04.868 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:04.868 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:04.868 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:04.868 00:09:04.868 00:09:04.868 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.868 http://cunit.sourceforge.net/ 00:09:04.868 00:09:04.868 00:09:04.868 Suite: bdevio tests on: Nvme3n1 00:09:04.868 Test: blockdev write read block ...passed 00:09:04.868 Test: blockdev write zeroes read block ...passed 00:09:04.868 Test: blockdev write zeroes read no split ...passed 00:09:04.868 Test: blockdev write zeroes read split ...passed 00:09:04.868 Test: blockdev write zeroes read split partial ...passed 00:09:04.868 Test: blockdev reset ...[2024-11-19 07:24:14.032612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:04.868 [2024-11-19 07:24:14.035633] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:04.868 passed 00:09:04.868 Test: blockdev write read 8 blocks ...passed 00:09:04.868 Test: blockdev write read size > 128k ...passed 00:09:04.868 Test: blockdev write read invalid size ...passed 00:09:04.868 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:04.868 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:04.868 Test: blockdev write read max offset ...passed 00:09:04.868 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:04.868 Test: blockdev writev readv 8 blocks ...passed 00:09:04.868 Test: blockdev writev readv 30 x 1block ...passed 00:09:04.868 Test: blockdev writev readv block ...passed 00:09:04.868 Test: blockdev writev readv size > 128k ...passed 00:09:04.868 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:04.868 Test: blockdev comparev and writev ...[2024-11-19 07:24:14.043501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ee0a000 len:0x1000 00:09:04.868 [2024-11-19 07:24:14.043547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:04.868 passed 00:09:04.868 Test: blockdev nvme passthru rw ...passed 00:09:04.868 Test: blockdev nvme passthru vendor specific ...passed 00:09:04.868 Test: blockdev nvme admin passthru ...[2024-11-19 07:24:14.044214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:04.868 [2024-11-19 07:24:14.044241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:04.868 passed 00:09:04.868 Test: blockdev copy ...passed 00:09:04.868 Suite: bdevio tests on: Nvme2n3 00:09:04.868 Test: blockdev write read block ...passed 00:09:04.868 Test: blockdev write zeroes read block ...passed 00:09:04.868 Test: blockdev write zeroes read no split ...passed 00:09:04.868 Test: blockdev write zeroes read split ...passed 00:09:04.868 Test: blockdev write zeroes read split partial ...passed 00:09:04.868 Test: blockdev reset ...[2024-11-19 07:24:14.101518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:04.868 [2024-11-19 07:24:14.104417] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:04.868 passed 00:09:04.868 Test: blockdev write read 8 blocks ...passed 00:09:04.868 Test: blockdev write read size > 128k ...passed 00:09:04.868 Test: blockdev write read invalid size ...passed 00:09:04.868 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:04.868 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:04.868 Test: blockdev write read max offset ...passed 00:09:04.868 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:04.868 Test: blockdev writev readv 8 blocks ...passed 00:09:04.868 Test: blockdev writev readv 30 x 1block ...passed 00:09:04.868 Test: blockdev writev readv block ...passed 00:09:04.868 Test: blockdev writev readv size > 128k ...passed 00:09:04.868 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:04.868 Test: blockdev comparev and writev ...[2024-11-19 07:24:14.111763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276d04000 len:0x1000 00:09:04.868 [2024-11-19 07:24:14.111805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:04.868 passed 00:09:04.868 Test: blockdev nvme passthru rw ...passed 00:09:04.868 Test: blockdev nvme passthru vendor specific ...passed 00:09:04.868 Test: blockdev nvme admin passthru ...[2024-11-19 07:24:14.112413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:04.868 [2024-11-19 07:24:14.112436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.127 passed 00:09:05.127 Test: blockdev copy ...passed 00:09:05.127 Suite: bdevio tests on: Nvme2n2 00:09:05.127 Test: blockdev write read block ...passed 00:09:05.127 Test: blockdev write zeroes read block ...passed 00:09:05.127 Test: blockdev write zeroes read no split ...passed 00:09:05.127 Test: blockdev write zeroes read split ...passed 00:09:05.127 Test: blockdev write zeroes read split partial ...passed 00:09:05.127 Test: blockdev reset ...[2024-11-19 07:24:14.171332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:05.127 [2024-11-19 07:24:14.174007] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:05.127 passed 00:09:05.127 Test: blockdev write read 8 blocks ...passed 00:09:05.127 Test: blockdev write read size > 128k ...passed 00:09:05.127 Test: blockdev write read invalid size ...passed 00:09:05.127 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.127 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.127 Test: blockdev write read max offset ...passed 00:09:05.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.127 Test: blockdev writev readv 8 blocks ...passed 00:09:05.127 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.127 Test: blockdev writev readv block ...passed 00:09:05.127 Test: blockdev writev readv size > 128k ...passed 00:09:05.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.127 Test: blockdev comparev and writev ...[2024-11-19 07:24:14.180886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276d04000 len:0x1000 00:09:05.127 [2024-11-19 07:24:14.180935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.127 passed 00:09:05.127 Test: blockdev nvme passthru rw ...passed 00:09:05.127 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.127 Test: blockdev nvme admin passthru ...[2024-11-19 07:24:14.181660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:05.127 [2024-11-19 07:24:14.181687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.127 passed 00:09:05.127 Test: blockdev copy ...passed 00:09:05.127 Suite: bdevio tests on: Nvme2n1 00:09:05.127 Test: blockdev write read block ...passed 00:09:05.127 Test: blockdev write zeroes read block ...passed 00:09:05.127 Test: blockdev write zeroes read no split ...passed 00:09:05.127 Test: blockdev write zeroes read split ...passed 00:09:05.127 Test: blockdev write zeroes read split partial ...passed 00:09:05.127 Test: blockdev reset ...[2024-11-19 07:24:14.245479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:05.127 [2024-11-19 07:24:14.248692] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:05.127 passed 00:09:05.127 Test: blockdev write read 8 blocks ...passed 00:09:05.127 Test: blockdev write read size > 128k ...passed 00:09:05.127 Test: blockdev write read invalid size ...passed 00:09:05.127 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.127 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.127 Test: blockdev write read max offset ...passed 00:09:05.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.127 Test: blockdev writev readv 8 blocks ...passed 00:09:05.127 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.127 Test: blockdev writev readv block ...passed 00:09:05.127 Test: blockdev writev readv size > 128k ...passed 00:09:05.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.127 Test: blockdev comparev and writev ...[2024-11-19 07:24:14.259349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28043c000 len:0x1000 00:09:05.127 [2024-11-19 07:24:14.259394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.127 passed 00:09:05.127 Test: blockdev nvme passthru rw ...passed 00:09:05.127 Test: blockdev nvme passthru vendor specific ...[2024-11-19 07:24:14.260719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:05.127 [2024-11-19 07:24:14.260747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.127 passed 00:09:05.127 Test: blockdev nvme admin passthru ...passed 00:09:05.127 Test: blockdev copy ...passed 00:09:05.127 Suite: bdevio tests on: Nvme1n1 00:09:05.127 Test: blockdev write read block ...passed 00:09:05.127 Test: blockdev write zeroes read block ...passed 00:09:05.127 Test: blockdev write zeroes read no split ...passed 00:09:05.127 Test: blockdev write zeroes read split ...passed 00:09:05.128 Test: blockdev write zeroes read split partial ...passed 00:09:05.128 Test: blockdev reset ...[2024-11-19 07:24:14.317209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:05.128 [2024-11-19 07:24:14.320850] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:05.128 passed 00:09:05.128 Test: blockdev write read 8 blocks ...passed 00:09:05.128 Test: blockdev write read size > 128k ...passed 00:09:05.128 Test: blockdev write read invalid size ...passed 00:09:05.128 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.128 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.128 Test: blockdev write read max offset ...passed 00:09:05.128 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.128 Test: blockdev writev readv 8 blocks ...passed 00:09:05.128 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.128 Test: blockdev writev readv block ...passed 00:09:05.128 Test: blockdev writev readv size > 128k ...passed 00:09:05.128 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.128 Test: blockdev comparev and writev ...[2024-11-19 07:24:14.337735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280438000 len:0x1000 00:09:05.128 [2024-11-19 07:24:14.337775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.128 passed 00:09:05.128 Test: blockdev nvme passthru rw ...passed 00:09:05.128 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.128 Test: blockdev nvme admin passthru ...[2024-11-19 07:24:14.340062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:05.128 [2024-11-19 07:24:14.340094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.128 passed 00:09:05.128 Test: blockdev copy ...passed 00:09:05.128 Suite: bdevio tests on: Nvme0n1p2 00:09:05.128 Test: blockdev write read block ...passed 00:09:05.128 Test: blockdev write zeroes read block ...passed 00:09:05.128 Test: blockdev write zeroes read no split ...passed 00:09:05.386 Test: blockdev write zeroes read split ...passed 00:09:05.386 Test: blockdev write zeroes read split partial ...passed 00:09:05.386 Test: blockdev reset ...[2024-11-19 07:24:14.400878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:05.386 [2024-11-19 07:24:14.403369] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:05.386 passed 00:09:05.386 Test: blockdev write read 8 blocks ...passed 00:09:05.386 Test: blockdev write read size > 128k ...passed 00:09:05.386 Test: blockdev write read invalid size ...passed 00:09:05.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.386 Test: blockdev write read max offset ...passed 00:09:05.386 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.386 Test: blockdev writev readv 8 blocks ...passed 00:09:05.386 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.386 Test: blockdev writev readv block ...passed 00:09:05.386 Test: blockdev writev readv size > 128k ...passed 00:09:05.386 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.386 Test: blockdev comparev and writev ...[2024-11-19 07:24:14.409536] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:05.386 separate metadata which is not supported yet. 
00:09:05.386 passed 00:09:05.386 Test: blockdev nvme passthru rw ...passed 00:09:05.386 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.386 Test: blockdev nvme admin passthru ...passed 00:09:05.386 Test: blockdev copy ...passed 00:09:05.386 Suite: bdevio tests on: Nvme0n1p1 00:09:05.386 Test: blockdev write read block ...passed 00:09:05.386 Test: blockdev write zeroes read block ...passed 00:09:05.386 Test: blockdev write zeroes read no split ...passed 00:09:05.386 Test: blockdev write zeroes read split ...passed 00:09:05.386 Test: blockdev write zeroes read split partial ...passed 00:09:05.386 Test: blockdev reset ...[2024-11-19 07:24:14.455581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:05.386 [2024-11-19 07:24:14.458728] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:05.386 passed 00:09:05.386 Test: blockdev write read 8 blocks ...passed 00:09:05.386 Test: blockdev write read size > 128k ...passed 00:09:05.386 Test: blockdev write read invalid size ...passed 00:09:05.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.386 Test: blockdev write read max offset ...passed 00:09:05.386 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.386 Test: blockdev writev readv 8 blocks ...passed 00:09:05.386 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.386 Test: blockdev writev readv block ...passed 00:09:05.386 Test: blockdev writev readv size > 128k ...passed 00:09:05.386 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.386 Test: blockdev comparev and writev ...passed 00:09:05.386 Test: blockdev nvme passthru rw ...passed 00:09:05.386 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.386 Test: blockdev nvme admin passthru ...passed 00:09:05.386 Test: blockdev copy ...[2024-11-19 07:24:14.470298] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:05.386 separate metadata which is not supported yet. 
00:09:05.386 passed 00:09:05.386 00:09:05.386 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.386 suites 7 7 n/a 0 0 00:09:05.387 tests 161 161 161 0 0 00:09:05.387 asserts 1006 1006 1006 0 n/a 00:09:05.387 00:09:05.387 Elapsed time = 1.285 seconds 00:09:05.387 0 00:09:05.387 07:24:14 -- bdev/blockdev.sh@293 -- # killprocess 62161 00:09:05.387 07:24:14 -- common/autotest_common.sh@936 -- # '[' -z 62161 ']' 00:09:05.387 07:24:14 -- common/autotest_common.sh@940 -- # kill -0 62161 00:09:05.387 07:24:14 -- common/autotest_common.sh@941 -- # uname 00:09:05.387 07:24:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:05.387 07:24:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62161 00:09:05.387 07:24:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:05.387 killing process with pid 62161 00:09:05.387 07:24:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:05.387 07:24:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62161' 00:09:05.387 07:24:14 -- common/autotest_common.sh@955 -- # kill 62161 00:09:05.387 07:24:14 -- common/autotest_common.sh@960 -- # wait 62161 00:09:05.953 07:24:15 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:05.953 00:09:05.953 real 0m2.192s 00:09:05.953 user 0m5.402s 00:09:05.953 sys 0m0.262s 00:09:05.953 07:24:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.953 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:09:05.953 ************************************ 00:09:05.953 END TEST bdev_bounds 00:09:05.953 ************************************ 00:09:06.212 07:24:15 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:06.212 07:24:15 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:06.212 07:24:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.212 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:09:06.212 ************************************ 00:09:06.212 START TEST bdev_nbd 00:09:06.212 ************************************ 00:09:06.212 07:24:15 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:06.212 07:24:15 -- bdev/blockdev.sh@298 -- # uname -s 00:09:06.212 07:24:15 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:06.212 07:24:15 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.212 07:24:15 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:06.212 07:24:15 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:06.212 07:24:15 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:06.212 07:24:15 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:09:06.212 07:24:15 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:06.212 07:24:15 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:06.212 07:24:15 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:06.212 07:24:15 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:09:06.212 07:24:15 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:06.212 07:24:15 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:06.212 07:24:15 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:06.212 07:24:15 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:06.212 07:24:15 -- bdev/blockdev.sh@316 -- # nbd_pid=62216 00:09:06.212 07:24:15 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:06.212 07:24:15 -- bdev/blockdev.sh@318 -- # waitforlisten 62216 /var/tmp/spdk-nbd.sock 00:09:06.212 07:24:15 -- common/autotest_common.sh@829 -- # '[' -z 62216 ']' 00:09:06.212 07:24:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:06.212 07:24:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:06.212 07:24:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:06.212 07:24:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.212 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:09:06.212 07:24:15 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:06.212 [2024-11-19 07:24:15.303434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:06.212 [2024-11-19 07:24:15.303540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.212 [2024-11-19 07:24:15.444947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.470 [2024-11-19 07:24:15.614853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.882 07:24:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.882 07:24:16 -- common/autotest_common.sh@862 -- # return 0 00:09:07.882 07:24:16 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@24 -- # local i 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:07.882 07:24:16 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:07.882 07:24:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:07.882 07:24:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:07.882 07:24:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:07.882 07:24:17 -- common/autotest_common.sh@867 -- # local i 00:09:07.882 07:24:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:07.882 07:24:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:07.882 07:24:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:07.882 07:24:17 -- common/autotest_common.sh@871 -- # break 00:09:07.882 07:24:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:07.882 07:24:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:07.882 07:24:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.882 1+0 records in 00:09:07.882 1+0 records out 00:09:07.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306884 s, 13.3 MB/s 00:09:07.882 07:24:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.882 07:24:17 -- common/autotest_common.sh@884 -- # size=4096 00:09:07.882 07:24:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.882 07:24:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:07.882 07:24:17 -- common/autotest_common.sh@887 -- # return 0 00:09:07.882 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:07.882 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:07.882 07:24:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:08.141 07:24:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:08.141 07:24:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:08.141 07:24:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:08.141 07:24:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:08.141 07:24:17 -- common/autotest_common.sh@867 -- # local i 00:09:08.141 07:24:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.141 07:24:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.141 07:24:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:08.141 07:24:17 -- common/autotest_common.sh@871 -- # break 00:09:08.141 07:24:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.141 07:24:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.141 07:24:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.141 1+0 records in 00:09:08.141 1+0 records out 00:09:08.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401961 s, 10.2 MB/s 00:09:08.141 07:24:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.141 07:24:17 -- common/autotest_common.sh@884 -- # size=4096 00:09:08.141 07:24:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.141 07:24:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.141 07:24:17 -- common/autotest_common.sh@887 -- # return 0 00:09:08.141 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.141 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.141 07:24:17 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:08.399 07:24:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:08.399 07:24:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:08.399 07:24:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:08.399 07:24:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:08.399 07:24:17 -- common/autotest_common.sh@867 -- # local i 00:09:08.399 07:24:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.399 07:24:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.399 07:24:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:08.399 07:24:17 -- common/autotest_common.sh@871 -- # break 00:09:08.399 07:24:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.399 07:24:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.400 07:24:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.400 1+0 records in 00:09:08.400 1+0 records out 00:09:08.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496521 s, 8.2 MB/s 00:09:08.400 07:24:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.400 07:24:17 -- common/autotest_common.sh@884 -- # size=4096 00:09:08.400 07:24:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.400 07:24:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.400 07:24:17 -- common/autotest_common.sh@887 -- # return 0 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:08.400 07:24:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:08.400 07:24:17 -- common/autotest_common.sh@867 -- # local i 00:09:08.400 07:24:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.400 07:24:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.400 07:24:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:08.400 07:24:17 -- common/autotest_common.sh@871 -- # break 00:09:08.400 07:24:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.400 07:24:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.400 07:24:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.400 1+0 records in 00:09:08.400 1+0 records out 00:09:08.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467302 s, 8.8 MB/s 00:09:08.400 07:24:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.400 07:24:17 -- common/autotest_common.sh@884 -- # size=4096 00:09:08.400 07:24:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.400 07:24:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.400 07:24:17 -- common/autotest_common.sh@887 -- # return 0 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.400 07:24:17 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.400 07:24:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:08.658 07:24:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:08.658 07:24:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:08.658 07:24:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:08.658 07:24:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:08.658 07:24:17 -- common/autotest_common.sh@867 -- # local i 00:09:08.658 07:24:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.658 07:24:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.658 07:24:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:08.658 07:24:17 -- common/autotest_common.sh@871 -- # break 00:09:08.658 07:24:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.658 07:24:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.658 07:24:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.658 1+0 records in 00:09:08.658 1+0 records out 00:09:08.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385009 s, 10.6 MB/s 00:09:08.658 07:24:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.658 07:24:17 -- common/autotest_common.sh@884 -- # size=4096 00:09:08.658 07:24:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.658 07:24:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.658 07:24:17 -- common/autotest_common.sh@887 -- # return 0 00:09:08.658 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.658 07:24:17 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.658 07:24:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:08.916 07:24:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:08.916 07:24:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:08.916 07:24:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:08.916 07:24:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:08.916 07:24:18 -- common/autotest_common.sh@867 -- # local i 00:09:08.916 07:24:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.916 07:24:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.916 07:24:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:08.916 07:24:18 -- common/autotest_common.sh@871 -- # break 00:09:08.916 07:24:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.916 07:24:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.916 07:24:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.916 1+0 records in 00:09:08.916 1+0 records out 00:09:08.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421678 s, 9.7 MB/s 00:09:08.916 07:24:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.917 07:24:18 -- common/autotest_common.sh@884 -- # size=4096 00:09:08.917 07:24:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.917 07:24:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.917 07:24:18 -- common/autotest_common.sh@887 -- # return 0 
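Note: every nbd_start_disk above is followed by the same readiness check. Condensed into one function it looks roughly like this — a simplified sketch, not the harness code verbatim: the harness dd's into a scratch file and verifies its size instead of discarding the read, and the retry interval here is illustrative.

    # Wait for /dev/$1 to show up in /proc/partitions, then prove it serves
    # I/O with a single 4 KiB direct read (bs/count/iflag as in the trace).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/"$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }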
00:09:08.917 07:24:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.917 07:24:18 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.917 07:24:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:09.174 07:24:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:09.174 07:24:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:09.174 07:24:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:09.174 07:24:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:09.174 07:24:18 -- common/autotest_common.sh@867 -- # local i 00:09:09.174 07:24:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:09.174 07:24:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:09.174 07:24:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:09.174 07:24:18 -- common/autotest_common.sh@871 -- # break 00:09:09.174 07:24:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:09.174 07:24:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:09.174 07:24:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.174 1+0 records in 00:09:09.174 1+0 records out 00:09:09.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054415 s, 7.5 MB/s 00:09:09.174 07:24:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.174 07:24:18 -- common/autotest_common.sh@884 -- # size=4096 00:09:09.174 07:24:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.174 07:24:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:09.174 07:24:18 -- common/autotest_common.sh@887 -- # return 0 00:09:09.174 07:24:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.174 07:24:18 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.174 07:24:18 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.432 07:24:18 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd0", 00:09:09.432 "bdev_name": "Nvme0n1p1" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd1", 00:09:09.432 "bdev_name": "Nvme0n1p2" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd2", 00:09:09.432 "bdev_name": "Nvme1n1" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd3", 00:09:09.432 "bdev_name": "Nvme2n1" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd4", 00:09:09.432 "bdev_name": "Nvme2n2" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd5", 00:09:09.432 "bdev_name": "Nvme2n3" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd6", 00:09:09.432 "bdev_name": "Nvme3n1" 00:09:09.432 } 00:09:09.432 ]' 00:09:09.432 07:24:18 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:09.432 07:24:18 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:09.432 07:24:18 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd0", 00:09:09.432 "bdev_name": "Nvme0n1p1" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd1", 00:09:09.432 "bdev_name": "Nvme0n1p2" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd2", 00:09:09.432 "bdev_name": "Nvme1n1" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": 
"/dev/nbd3", 00:09:09.432 "bdev_name": "Nvme2n1" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd4", 00:09:09.432 "bdev_name": "Nvme2n2" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd5", 00:09:09.432 "bdev_name": "Nvme2n3" 00:09:09.432 }, 00:09:09.432 { 00:09:09.432 "nbd_device": "/dev/nbd6", 00:09:09.432 "bdev_name": "Nvme3n1" 00:09:09.433 } 00:09:09.433 ]' 00:09:09.433 07:24:18 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:09.433 07:24:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.433 07:24:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:09.433 07:24:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:09.433 07:24:18 -- bdev/nbd_common.sh@51 -- # local i 00:09:09.433 07:24:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.433 07:24:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@41 -- # break 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@41 -- # break 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.691 07:24:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@41 -- # break 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.949 07:24:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:09:10.206 07:24:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@41 -- # break 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:10.206 07:24:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@41 -- # break 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@41 -- # break 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.464 07:24:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@41 -- # break 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.723 07:24:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:10.981 07:24:20 -- 
bdev/nbd_common.sh@65 -- # true 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@65 -- # count=0 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@122 -- # count=0 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@127 -- # return 0 00:09:10.981 07:24:20 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@12 -- # local i 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:10.981 07:24:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:11.239 /dev/nbd0 00:09:11.239 07:24:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:11.239 07:24:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:11.239 07:24:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:11.239 07:24:20 -- common/autotest_common.sh@867 -- # local i 00:09:11.239 07:24:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:11.239 07:24:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:11.239 07:24:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:11.239 07:24:20 -- common/autotest_common.sh@871 -- # break 00:09:11.239 07:24:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:11.239 07:24:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:11.239 07:24:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.239 1+0 records in 00:09:11.239 1+0 records out 00:09:11.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393085 s, 10.4 MB/s 00:09:11.239 07:24:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.239 07:24:20 -- common/autotest_common.sh@884 -- # size=4096 00:09:11.239 07:24:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.239 07:24:20 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:11.239 07:24:20 -- common/autotest_common.sh@887 -- # return 0 00:09:11.239 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.239 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:11.239 07:24:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:11.239 /dev/nbd1 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:11.498 07:24:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:11.498 07:24:20 -- common/autotest_common.sh@867 -- # local i 00:09:11.498 07:24:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:11.498 07:24:20 -- common/autotest_common.sh@871 -- # break 00:09:11.498 07:24:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.498 1+0 records in 00:09:11.498 1+0 records out 00:09:11.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479241 s, 8.5 MB/s 00:09:11.498 07:24:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.498 07:24:20 -- common/autotest_common.sh@884 -- # size=4096 00:09:11.498 07:24:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.498 07:24:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:11.498 07:24:20 -- common/autotest_common.sh@887 -- # return 0 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:11.498 /dev/nbd10 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:11.498 07:24:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:11.498 07:24:20 -- common/autotest_common.sh@867 -- # local i 00:09:11.498 07:24:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:11.498 07:24:20 -- common/autotest_common.sh@871 -- # break 00:09:11.498 07:24:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:11.498 07:24:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.498 1+0 records in 00:09:11.498 1+0 records out 00:09:11.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342042 s, 12.0 MB/s 00:09:11.498 07:24:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.498 07:24:20 -- common/autotest_common.sh@884 -- # size=4096 00:09:11.498 07:24:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:09:11.498 07:24:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:11.498 07:24:20 -- common/autotest_common.sh@887 -- # return 0 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:11.498 07:24:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:11.757 /dev/nbd11 00:09:11.757 07:24:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:11.757 07:24:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:11.757 07:24:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:11.757 07:24:20 -- common/autotest_common.sh@867 -- # local i 00:09:11.757 07:24:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:11.757 07:24:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:11.757 07:24:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:11.757 07:24:20 -- common/autotest_common.sh@871 -- # break 00:09:11.757 07:24:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:11.757 07:24:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:11.757 07:24:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.757 1+0 records in 00:09:11.757 1+0 records out 00:09:11.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396161 s, 10.3 MB/s 00:09:11.757 07:24:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.757 07:24:20 -- common/autotest_common.sh@884 -- # size=4096 00:09:11.757 07:24:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.757 07:24:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:11.757 07:24:20 -- common/autotest_common.sh@887 -- # return 0 00:09:11.757 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.757 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:11.757 07:24:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:12.015 /dev/nbd12 00:09:12.015 07:24:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:12.015 07:24:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:12.015 07:24:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:12.015 07:24:21 -- common/autotest_common.sh@867 -- # local i 00:09:12.015 07:24:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:12.015 07:24:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:12.015 07:24:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:12.015 07:24:21 -- common/autotest_common.sh@871 -- # break 00:09:12.015 07:24:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:12.015 07:24:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:12.015 07:24:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.015 1+0 records in 00:09:12.015 1+0 records out 00:09:12.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323805 s, 12.6 MB/s 00:09:12.015 07:24:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.015 07:24:21 -- common/autotest_common.sh@884 -- # size=4096 00:09:12.015 07:24:21 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.015 07:24:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:12.015 07:24:21 -- common/autotest_common.sh@887 -- # return 0 00:09:12.015 07:24:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.015 07:24:21 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.015 07:24:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:12.274 /dev/nbd13 00:09:12.274 07:24:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:12.274 07:24:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:12.274 07:24:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:12.274 07:24:21 -- common/autotest_common.sh@867 -- # local i 00:09:12.274 07:24:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:12.274 07:24:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:12.274 07:24:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:12.274 07:24:21 -- common/autotest_common.sh@871 -- # break 00:09:12.274 07:24:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:12.274 07:24:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:12.274 07:24:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.274 1+0 records in 00:09:12.274 1+0 records out 00:09:12.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462665 s, 8.9 MB/s 00:09:12.274 07:24:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.274 07:24:21 -- common/autotest_common.sh@884 -- # size=4096 00:09:12.274 07:24:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.274 07:24:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:12.274 07:24:21 -- common/autotest_common.sh@887 -- # return 0 00:09:12.274 07:24:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.274 07:24:21 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.274 07:24:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:12.533 /dev/nbd14 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:12.533 07:24:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:12.533 07:24:21 -- common/autotest_common.sh@867 -- # local i 00:09:12.533 07:24:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:12.533 07:24:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:12.533 07:24:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:12.533 07:24:21 -- common/autotest_common.sh@871 -- # break 00:09:12.533 07:24:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:12.533 07:24:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:12.533 07:24:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.533 1+0 records in 00:09:12.533 1+0 records out 00:09:12.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053618 s, 7.6 MB/s 00:09:12.533 07:24:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.533 07:24:21 -- common/autotest_common.sh@884 -- # size=4096 00:09:12.533 07:24:21 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.533 07:24:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:12.533 07:24:21 -- common/autotest_common.sh@887 -- # return 0 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.533 07:24:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:12.533 { 00:09:12.533 "nbd_device": "/dev/nbd0", 00:09:12.533 "bdev_name": "Nvme0n1p1" 00:09:12.533 }, 00:09:12.533 { 00:09:12.533 "nbd_device": "/dev/nbd1", 00:09:12.533 "bdev_name": "Nvme0n1p2" 00:09:12.533 }, 00:09:12.533 { 00:09:12.533 "nbd_device": "/dev/nbd10", 00:09:12.533 "bdev_name": "Nvme1n1" 00:09:12.533 }, 00:09:12.533 { 00:09:12.533 "nbd_device": "/dev/nbd11", 00:09:12.533 "bdev_name": "Nvme2n1" 00:09:12.533 }, 00:09:12.533 { 00:09:12.533 "nbd_device": "/dev/nbd12", 00:09:12.533 "bdev_name": "Nvme2n2" 00:09:12.533 }, 00:09:12.533 { 00:09:12.533 "nbd_device": "/dev/nbd13", 00:09:12.533 "bdev_name": "Nvme2n3" 00:09:12.533 }, 00:09:12.533 { 00:09:12.533 "nbd_device": "/dev/nbd14", 00:09:12.533 "bdev_name": "Nvme3n1" 00:09:12.533 } 00:09:12.533 ]' 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:12.792 { 00:09:12.792 "nbd_device": "/dev/nbd0", 00:09:12.792 "bdev_name": "Nvme0n1p1" 00:09:12.792 }, 00:09:12.792 { 00:09:12.792 "nbd_device": "/dev/nbd1", 00:09:12.792 "bdev_name": "Nvme0n1p2" 00:09:12.792 }, 00:09:12.792 { 00:09:12.792 "nbd_device": "/dev/nbd10", 00:09:12.792 "bdev_name": "Nvme1n1" 00:09:12.792 }, 00:09:12.792 { 00:09:12.792 "nbd_device": "/dev/nbd11", 00:09:12.792 "bdev_name": "Nvme2n1" 00:09:12.792 }, 00:09:12.792 { 00:09:12.792 "nbd_device": "/dev/nbd12", 00:09:12.792 "bdev_name": "Nvme2n2" 00:09:12.792 }, 00:09:12.792 { 00:09:12.792 "nbd_device": "/dev/nbd13", 00:09:12.792 "bdev_name": "Nvme2n3" 00:09:12.792 }, 00:09:12.792 { 00:09:12.792 "nbd_device": "/dev/nbd14", 00:09:12.792 "bdev_name": "Nvme3n1" 00:09:12.792 } 00:09:12.792 ]' 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:12.792 /dev/nbd1 00:09:12.792 /dev/nbd10 00:09:12.792 /dev/nbd11 00:09:12.792 /dev/nbd12 00:09:12.792 /dev/nbd13 00:09:12.792 /dev/nbd14' 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:12.792 /dev/nbd1 00:09:12.792 /dev/nbd10 00:09:12.792 /dev/nbd11 00:09:12.792 /dev/nbd12 00:09:12.792 /dev/nbd13 00:09:12.792 /dev/nbd14' 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@65 -- # count=7 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@95 -- # count=7 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:12.792 256+0 records in 00:09:12.792 256+0 records out 00:09:12.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721362 s, 145 MB/s 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:12.792 256+0 records in 00:09:12.792 256+0 records out 00:09:12.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102233 s, 10.3 MB/s 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.792 07:24:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:13.051 256+0 records in 00:09:13.051 256+0 records out 00:09:13.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149419 s, 7.0 MB/s 00:09:13.051 07:24:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.051 07:24:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:13.051 256+0 records in 00:09:13.051 256+0 records out 00:09:13.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0932976 s, 11.2 MB/s 00:09:13.051 07:24:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.051 07:24:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:13.051 256+0 records in 00:09:13.051 256+0 records out 00:09:13.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0979325 s, 10.7 MB/s 00:09:13.051 07:24:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.051 07:24:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:13.310 256+0 records in 00:09:13.310 256+0 records out 00:09:13.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0913706 s, 11.5 MB/s 00:09:13.310 07:24:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.310 07:24:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:13.310 256+0 records in 00:09:13.310 256+0 records out 00:09:13.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787084 s, 13.3 MB/s 00:09:13.310 07:24:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.310 07:24:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:13.310 256+0 records in 00:09:13.310 256+0 records out 00:09:13.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0790393 s, 13.3 MB/s 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@71 
-- # local operation=verify 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.311 07:24:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@51 -- # local i 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@41 -- # break 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.569 07:24:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:13.827 07:24:22 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@41 -- # break 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.827 07:24:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@41 -- # break 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.086 07:24:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@41 -- # break 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@41 -- # break 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.344 07:24:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@41 -- # break 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.601 07:24:23 -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:09:14.602 07:24:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@41 -- # break 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.859 07:24:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@65 -- # true 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@65 -- # count=0 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@104 -- # count=0 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@109 -- # return 0 00:09:15.117 07:24:24 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:15.117 07:24:24 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:15.375 malloc_lvol_verify 00:09:15.375 07:24:24 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:15.375 5e4fd31d-39e4-448f-8540-b28b13fcba3c 00:09:15.375 07:24:24 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:15.632 179875dc-dcb1-4508-8e6a-184fb64c68b7 00:09:15.632 07:24:24 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:15.890 /dev/nbd0 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:15.890 mke2fs 1.47.0 (5-Feb-2023) 00:09:15.890 Discarding device blocks: 0/4096 done 00:09:15.890 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:15.890 00:09:15.890 Allocating group tables: 0/1 done 00:09:15.890 Writing inode tables: 0/1 done 00:09:15.890 Creating journal (1024 
blocks): done 00:09:15.890 Writing superblocks and filesystem accounting information: 0/1 done 00:09:15.890 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@51 -- # local i 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.890 07:24:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:16.148 07:24:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@41 -- # break 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:16.149 07:24:25 -- bdev/nbd_common.sh@147 -- # return 0 00:09:16.149 07:24:25 -- bdev/blockdev.sh@324 -- # killprocess 62216 00:09:16.149 07:24:25 -- common/autotest_common.sh@936 -- # '[' -z 62216 ']' 00:09:16.149 07:24:25 -- common/autotest_common.sh@940 -- # kill -0 62216 00:09:16.149 07:24:25 -- common/autotest_common.sh@941 -- # uname 00:09:16.149 07:24:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.149 07:24:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62216 00:09:16.149 07:24:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:16.149 07:24:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:16.149 killing process with pid 62216 00:09:16.149 07:24:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62216' 00:09:16.149 07:24:25 -- common/autotest_common.sh@955 -- # kill 62216 00:09:16.149 07:24:25 -- common/autotest_common.sh@960 -- # wait 62216 00:09:17.082 07:24:26 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:17.082 00:09:17.082 real 0m10.802s 00:09:17.082 user 0m15.153s 00:09:17.082 sys 0m3.250s 00:09:17.082 07:24:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.082 07:24:26 -- common/autotest_common.sh@10 -- # set +x 00:09:17.082 ************************************ 00:09:17.082 END TEST bdev_nbd 00:09:17.082 ************************************ 00:09:17.082 07:24:26 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:17.082 07:24:26 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:17.082 07:24:26 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:17.082 07:24:26 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:17.082 skipping fio tests on NVMe due to multi-ns failures. 
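Before the suite above tore down, nbd_with_lvol_verify ran one last end-to-end proof: build a malloc bdev, layer a logical volume store on it, carve out a small lvol, export it as /dev/nbd0, and run mkfs.ext4 against it so writes through the NBD path are exercised for real. Condensed into the underlying rpc.py calls (socket path and arguments are copied from the trace; reading 16/512 as size in MiB and block size in bytes, and 4 as the lvol size in MiB, follows rpc.py conventions and is stated here as an assumption):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # small lvol inside the store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol over NBD
    mkfs.ext4 /dev/nbd0                                    # end-to-end write test
    $rpc nbd_stop_disk /dev/nbd0

The mkfs banner above ("Creating filesystem with 4096 1k blocks") corroborates the 4 MiB lvol size, and mkfs_ret=0 is what lets the test return success before killprocess reaps the nbd daemon.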
00:09:17.082 07:24:26 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:17.082 07:24:26 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:17.082 07:24:26 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:17.082 07:24:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.082 07:24:26 -- common/autotest_common.sh@10 -- # set +x 00:09:17.082 ************************************ 00:09:17.082 START TEST bdev_verify 00:09:17.082 ************************************ 00:09:17.082 07:24:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:17.082 [2024-11-19 07:24:26.140815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:17.082 [2024-11-19 07:24:26.140927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62632 ] 00:09:17.082 [2024-11-19 07:24:26.290056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.340 [2024-11-19 07:24:26.460215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.340 [2024-11-19 07:24:26.460255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.906 Running I/O for 5 seconds... 00:09:23.198 00:09:23.198 Latency(us) 00:09:23.198 [2024-11-19T07:24:32.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x0 length 0x5e800 00:09:23.198 Nvme0n1p1 : 5.05 2288.53 8.94 0.00 0.00 55749.01 8620.50 55251.89 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x5e800 length 0x5e800 00:09:23.198 Nvme0n1p1 : 5.05 2346.32 9.17 0.00 0.00 54352.53 10586.58 62511.26 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x0 length 0x5e7ff 00:09:23.198 Nvme0n1p2 : 5.06 2293.05 8.96 0.00 0.00 55647.18 5999.06 52832.10 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:23.198 Nvme0n1p2 : 5.06 2351.41 9.19 0.00 0.00 54211.86 4965.61 62107.96 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x0 length 0xa0000 00:09:23.198 Nvme1n1 : 5.06 2292.30 8.95 0.00 0.00 55614.32 6251.13 50210.66 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0xa0000 length 0xa0000 00:09:23.198 Nvme1n1 : 5.06 2350.19 9.18 0.00 0.00 54168.35 6553.60 60494.77 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x0 length 0x80000 00:09:23.198 Nvme2n1 : 5.06 2291.74 
8.95 0.00 0.00 55526.88 6351.95 51017.26 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x80000 length 0x80000 00:09:23.198 Nvme2n1 : 5.06 2348.71 9.17 0.00 0.00 54144.54 8519.68 62511.26 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x0 length 0x80000 00:09:23.198 Nvme2n2 : 5.06 2290.59 8.95 0.00 0.00 55489.43 8217.21 52025.50 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x80000 length 0x80000 00:09:23.198 Nvme2n2 : 5.07 2347.13 9.17 0.00 0.00 54103.55 10889.06 61301.37 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x0 length 0x80000 00:09:23.198 Nvme2n3 : 5.06 2289.17 8.94 0.00 0.00 55451.73 10637.00 52025.50 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x80000 length 0x80000 00:09:23.198 Nvme2n3 : 5.07 2345.68 9.16 0.00 0.00 54067.73 13409.67 60898.07 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x0 length 0x20000 00:09:23.198 Nvme3n1 : 5.07 2287.65 8.94 0.00 0.00 55422.77 12351.02 51218.90 00:09:23.198 [2024-11-19T07:24:32.448Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.198 Verification LBA range: start 0x20000 length 0x20000 00:09:23.198 Nvme3n1 : 5.07 2351.64 9.19 0.00 0.00 53938.52 2445.00 61704.66 00:09:23.198 [2024-11-19T07:24:32.448Z] =================================================================================================================== 00:09:23.198 [2024-11-19T07:24:32.448Z] Total : 32474.12 126.85 0.00 0.00 54839.79 2445.00 62511.26 00:09:27.489 00:09:27.489 real 0m9.814s 00:09:27.489 user 0m16.725s 00:09:27.489 sys 0m0.252s 00:09:27.489 ************************************ 00:09:27.489 END TEST bdev_verify 00:09:27.489 ************************************ 00:09:27.489 07:24:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:27.489 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:09:27.489 07:24:35 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:27.489 07:24:35 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:27.489 07:24:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.489 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:09:27.489 ************************************ 00:09:27.489 START TEST bdev_verify_big_io 00:09:27.489 ************************************ 00:09:27.489 07:24:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:27.489 [2024-11-19 07:24:36.024161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:27.489 [2024-11-19 07:24:36.024285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62753 ] 00:09:27.489 [2024-11-19 07:24:36.174031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.489 [2024-11-19 07:24:36.359705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.489 [2024-11-19 07:24:36.359783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.056 Running I/O for 5 seconds... 00:09:34.611 00:09:34.611 Latency(us) 00:09:34.611 [2024-11-19T07:24:43.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x0 length 0x5e80 00:09:34.611 Nvme0n1p1 : 5.38 218.38 13.65 0.00 0.00 571710.49 44967.78 1206669.00 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x5e80 length 0x5e80 00:09:34.611 Nvme0n1p1 : 5.38 250.97 15.69 0.00 0.00 502117.72 37305.11 777559.43 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x0 length 0x5e7f 00:09:34.611 Nvme0n1p2 : 5.46 222.47 13.90 0.00 0.00 546924.54 73803.62 1096971.82 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:34.611 Nvme0n1p2 : 5.38 250.86 15.68 0.00 0.00 495597.98 38111.70 709805.29 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x0 length 0xa000 00:09:34.611 Nvme1n1 : 5.49 228.34 14.27 0.00 0.00 520245.84 27827.59 974369.08 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0xa000 length 0xa000 00:09:34.611 Nvme1n1 : 5.38 250.79 15.67 0.00 0.00 488989.44 38918.30 664635.86 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x0 length 0x8000 00:09:34.611 Nvme2n1 : 5.52 244.31 15.27 0.00 0.00 476314.88 16736.89 851766.35 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x8000 length 0x8000 00:09:34.611 Nvme2n1 : 5.42 256.60 16.04 0.00 0.00 471410.09 39523.25 609787.27 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x0 length 0x8000 00:09:34.611 Nvme2n2 : 5.59 272.77 17.05 0.00 0.00 417481.93 10032.05 738842.78 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x8000 length 0x8000 00:09:34.611 Nvme2n2 : 5.42 256.53 16.03 0.00 0.00 464884.69 40128.20 558165.07 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x0 
length 0x8000 00:09:34.611 Nvme2n3 : 5.70 335.09 20.94 0.00 0.00 333018.61 5747.00 645277.54 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x8000 length 0x8000 00:09:34.611 Nvme2n3 : 5.46 270.67 16.92 0.00 0.00 436543.33 13409.67 503316.48 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x0 length 0x2000 00:09:34.611 Nvme3n1 : 5.79 457.10 28.57 0.00 0.00 240074.34 86.65 896935.78 00:09:34.611 [2024-11-19T07:24:43.861Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.611 Verification LBA range: start 0x2000 length 0x2000 00:09:34.611 Nvme3n1 : 5.47 286.60 17.91 0.00 0.00 407638.84 2394.58 500090.09 00:09:34.611 [2024-11-19T07:24:43.861Z] =================================================================================================================== 00:09:34.611 [2024-11-19T07:24:43.861Z] Total : 3801.47 237.59 0.00 0.00 435871.64 86.65 1206669.00 00:09:35.610 00:09:35.610 real 0m8.719s 00:09:35.610 user 0m15.571s 00:09:35.610 sys 0m0.251s 00:09:35.610 07:24:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.610 ************************************ 00:09:35.610 END TEST bdev_verify_big_io 00:09:35.610 ************************************ 00:09:35.610 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:09:35.610 07:24:44 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:35.610 07:24:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:35.610 07:24:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.610 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:09:35.610 ************************************ 00:09:35.610 START TEST bdev_write_zeroes 00:09:35.610 ************************************ 00:09:35.610 07:24:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:35.610 [2024-11-19 07:24:44.775454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:35.610 [2024-11-19 07:24:44.775563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62868 ] 00:09:35.868 [2024-11-19 07:24:44.923036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.868 [2024-11-19 07:24:45.058734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.433 Running I/O for 1 seconds... 
00:09:37.365 00:09:37.365 Latency(us) 00:09:37.365 [2024-11-19T07:24:46.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.365 [2024-11-19T07:24:46.615Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.365 Nvme0n1p1 : 1.01 9608.18 37.53 0.00 0.00 13283.40 6856.07 22685.54 00:09:37.365 [2024-11-19T07:24:46.615Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.365 Nvme0n1p2 : 1.01 9596.45 37.49 0.00 0.00 13281.63 6301.54 27021.00 00:09:37.365 [2024-11-19T07:24:46.615Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.365 Nvme1n1 : 1.01 9585.63 37.44 0.00 0.00 13246.11 9729.58 22483.89 00:09:37.365 [2024-11-19T07:24:46.615Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.365 Nvme2n1 : 1.02 9617.54 37.57 0.00 0.00 13190.37 8267.62 22483.89 00:09:37.365 [2024-11-19T07:24:46.615Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.365 Nvme2n2 : 1.02 9606.76 37.53 0.00 0.00 13163.46 8670.92 22282.24 00:09:37.365 [2024-11-19T07:24:46.615Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.365 Nvme2n3 : 1.02 9648.46 37.69 0.00 0.00 13083.45 4436.28 22383.06 00:09:37.365 [2024-11-19T07:24:46.615Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.365 Nvme3n1 : 1.02 9575.10 37.40 0.00 0.00 13155.68 4864.79 22483.89 00:09:37.365 [2024-11-19T07:24:46.615Z] =================================================================================================================== 00:09:37.365 [2024-11-19T07:24:46.615Z] Total : 67238.12 262.65 0.00 0.00 13200.28 4436.28 27021.00 00:09:38.296 00:09:38.296 real 0m2.719s 00:09:38.296 user 0m2.439s 00:09:38.296 sys 0m0.167s 00:09:38.297 07:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.297 07:24:47 -- common/autotest_common.sh@10 -- # set +x 00:09:38.297 ************************************ 00:09:38.297 END TEST bdev_write_zeroes 00:09:38.297 ************************************ 00:09:38.297 07:24:47 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.297 07:24:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:38.297 07:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.297 07:24:47 -- common/autotest_common.sh@10 -- # set +x 00:09:38.297 ************************************ 00:09:38.297 START TEST bdev_json_nonenclosed 00:09:38.297 ************************************ 00:09:38.297 07:24:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.297 [2024-11-19 07:24:47.532308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:38.297 [2024-11-19 07:24:47.532414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62921 ] 00:09:38.555 [2024-11-19 07:24:47.681369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.813 [2024-11-19 07:24:47.860419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.813 [2024-11-19 07:24:47.860561] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:38.813 [2024-11-19 07:24:47.860583] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:39.071 00:09:39.071 real 0m0.661s 00:09:39.071 user 0m0.470s 00:09:39.071 sys 0m0.087s 00:09:39.071 07:24:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:39.071 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.071 ************************************ 00:09:39.071 END TEST bdev_json_nonenclosed 00:09:39.071 ************************************ 00:09:39.071 07:24:48 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:39.071 07:24:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:39.071 07:24:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.071 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.071 ************************************ 00:09:39.071 START TEST bdev_json_nonarray 00:09:39.071 ************************************ 00:09:39.071 07:24:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:39.071 [2024-11-19 07:24:48.231063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:39.071 [2024-11-19 07:24:48.231174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62941 ] 00:09:39.329 [2024-11-19 07:24:48.378023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.329 [2024-11-19 07:24:48.565829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.329 [2024-11-19 07:24:48.565975] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
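The two JSON negative tests above pass only when bdevperf refuses to start: bdev_json_nonenclosed feeds a config whose body is not wrapped in a top-level {} object, and bdev_json_nonarray one whose "subsystems" key is not an array. A sketch of the two failure shapes, with hypothetical /tmp paths (the real fixtures are the nonenclosed.json and nonarray.json files referenced above):

    printf '"subsystems": []\n' > /tmp/nonenclosed.json   # no enclosing {} -> "not enclosed in {}"
    printf '{"subsystems": {}}\n' > /tmp/nonarray.json    # object, not array -> "'subsystems' should be an array"
    # each run must exit non-zero via spdk_app_stop for the test to pass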
00:09:39.329 [2024-11-19 07:24:48.565992] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:39.893 00:09:39.894 real 0m0.672s 00:09:39.894 user 0m0.469s 00:09:39.894 sys 0m0.097s 00:09:39.894 07:24:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:39.894 ************************************ 00:09:39.894 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.894 END TEST bdev_json_nonarray 00:09:39.894 ************************************ 00:09:39.894 07:24:48 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:39.894 07:24:48 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:39.894 07:24:48 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:39.894 07:24:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:39.894 07:24:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.894 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.894 ************************************ 00:09:39.894 START TEST bdev_gpt_uuid 00:09:39.894 ************************************ 00:09:39.894 07:24:48 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:39.894 07:24:48 -- bdev/blockdev.sh@612 -- # local bdev 00:09:39.894 07:24:48 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:39.894 07:24:48 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62972 00:09:39.894 07:24:48 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:39.894 07:24:48 -- bdev/blockdev.sh@47 -- # waitforlisten 62972 00:09:39.894 07:24:48 -- common/autotest_common.sh@829 -- # '[' -z 62972 ']' 00:09:39.894 07:24:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.894 07:24:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.894 07:24:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.894 07:24:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.894 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:09:39.894 07:24:48 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:39.894 [2024-11-19 07:24:48.950488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:39.894 [2024-11-19 07:24:48.950595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62972 ] 00:09:39.894 [2024-11-19 07:24:49.098414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.151 [2024-11-19 07:24:49.267938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.151 [2024-11-19 07:24:49.268136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.529 07:24:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.529 07:24:50 -- common/autotest_common.sh@862 -- # return 0 00:09:41.529 07:24:50 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:41.529 07:24:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.529 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 Some configs were skipped because the RPC state that can call them passed over. 
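bdev_gpt_uuid starts a bare spdk_tgt, replays the bdev configuration over RPC with load_config, waits for bdev examine to finish, and then resolves each GPT partition bdev by its unique partition GUID rather than by its Nvme0n1pN name. A sketch of the lookup performed next, assuming the standard scripts/rpc.py client:

    # a GPT partition bdev is addressable by its unique_partition_guid
    scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'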
00:09:41.529 07:24:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.529 07:24:50 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:41.529 07:24:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.529 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 07:24:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.529 07:24:50 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:41.529 07:24:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.529 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 07:24:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.529 07:24:50 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:41.529 { 00:09:41.529 "name": "Nvme0n1p1", 00:09:41.529 "aliases": [ 00:09:41.529 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:41.529 ], 00:09:41.529 "product_name": "GPT Disk", 00:09:41.529 "block_size": 4096, 00:09:41.529 "num_blocks": 774144, 00:09:41.529 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:41.529 "md_size": 64, 00:09:41.529 "md_interleave": false, 00:09:41.529 "dif_type": 0, 00:09:41.529 "assigned_rate_limits": { 00:09:41.529 "rw_ios_per_sec": 0, 00:09:41.529 "rw_mbytes_per_sec": 0, 00:09:41.529 "r_mbytes_per_sec": 0, 00:09:41.529 "w_mbytes_per_sec": 0 00:09:41.529 }, 00:09:41.529 "claimed": false, 00:09:41.529 "zoned": false, 00:09:41.529 "supported_io_types": { 00:09:41.529 "read": true, 00:09:41.529 "write": true, 00:09:41.529 "unmap": true, 00:09:41.529 "write_zeroes": true, 00:09:41.530 "flush": true, 00:09:41.530 "reset": true, 00:09:41.530 "compare": true, 00:09:41.530 "compare_and_write": false, 00:09:41.530 "abort": true, 00:09:41.530 "nvme_admin": false, 00:09:41.530 "nvme_io": false 00:09:41.530 }, 00:09:41.530 "driver_specific": { 00:09:41.530 "gpt": { 00:09:41.530 "base_bdev": "Nvme0n1", 00:09:41.530 "offset_blocks": 256, 00:09:41.530 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:41.530 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:41.530 "partition_name": "SPDK_TEST_first" 00:09:41.530 } 00:09:41.530 } 00:09:41.530 } 00:09:41.530 ]' 00:09:41.530 07:24:50 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:41.790 07:24:50 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:41.790 07:24:50 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:41.790 07:24:50 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:41.790 07:24:50 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:41.790 07:24:50 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:41.790 07:24:50 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:41.790 07:24:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.790 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.790 07:24:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.790 07:24:50 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:41.790 { 00:09:41.790 "name": "Nvme0n1p2", 00:09:41.790 "aliases": [ 00:09:41.790 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:41.790 ], 00:09:41.790 "product_name": "GPT Disk", 00:09:41.790 "block_size": 4096, 00:09:41.790 "num_blocks": 774143, 00:09:41.790 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:41.790 "md_size": 64, 00:09:41.790 "md_interleave": false, 00:09:41.790 "dif_type": 0, 00:09:41.790 "assigned_rate_limits": { 00:09:41.790 "rw_ios_per_sec": 0, 00:09:41.790 "rw_mbytes_per_sec": 0, 00:09:41.790 "r_mbytes_per_sec": 0, 00:09:41.790 "w_mbytes_per_sec": 0 00:09:41.790 }, 00:09:41.790 "claimed": false, 00:09:41.790 "zoned": false, 00:09:41.790 "supported_io_types": { 00:09:41.790 "read": true, 00:09:41.790 "write": true, 00:09:41.790 "unmap": true, 00:09:41.790 "write_zeroes": true, 00:09:41.790 "flush": true, 00:09:41.790 "reset": true, 00:09:41.790 "compare": true, 00:09:41.790 "compare_and_write": false, 00:09:41.790 "abort": true, 00:09:41.790 "nvme_admin": false, 00:09:41.790 "nvme_io": false 00:09:41.790 }, 00:09:41.790 "driver_specific": { 00:09:41.790 "gpt": { 00:09:41.790 "base_bdev": "Nvme0n1", 00:09:41.790 "offset_blocks": 774400, 00:09:41.790 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:41.790 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:41.790 "partition_name": "SPDK_TEST_second" 00:09:41.790 } 00:09:41.790 } 00:09:41.790 } 00:09:41.790 ]' 00:09:41.790 07:24:50 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:41.790 07:24:50 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:41.790 07:24:50 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:41.790 07:24:50 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:41.790 07:24:50 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:41.790 07:24:50 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:41.790 07:24:50 -- bdev/blockdev.sh@629 -- # killprocess 62972 00:09:41.790 07:24:50 -- common/autotest_common.sh@936 -- # '[' -z 62972 ']' 00:09:41.790 07:24:50 -- common/autotest_common.sh@940 -- # kill -0 62972 00:09:41.790 07:24:50 -- common/autotest_common.sh@941 -- # uname 00:09:41.790 07:24:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:41.790 07:24:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62972 00:09:41.790 07:24:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:41.790 killing process with pid 62972 00:09:41.790 07:24:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:41.790 07:24:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62972' 00:09:41.790 07:24:51 -- common/autotest_common.sh@955 -- # kill 62972 00:09:41.790 07:24:51 -- common/autotest_common.sh@960 -- # wait 62972 00:09:43.705 00:09:43.705 real 0m3.583s 00:09:43.705 user 0m3.876s 00:09:43.705 sys 0m0.363s 00:09:43.705 07:24:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.705 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:09:43.705 ************************************ 00:09:43.705 END TEST bdev_gpt_uuid 00:09:43.705 ************************************ 00:09:43.705 07:24:52 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:43.705 07:24:52 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:43.705 07:24:52 -- bdev/blockdev.sh@809 -- # cleanup 00:09:43.705 07:24:52 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:43.705 07:24:52 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:43.705 07:24:52 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:43.705 07:24:52 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:43.705 07:24:52 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:43.705 07:24:52 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:43.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.705 Waiting for block devices as requested 00:09:43.705 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.967 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.967 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.967 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.242 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:49.242 07:24:58 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:49.242 07:24:58 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:49.242 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:49.242 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:49.242 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:49.242 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:49.242 07:24:58 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:49.242 00:09:49.242 real 0m59.148s 00:09:49.242 user 1m14.521s 00:09:49.242 sys 0m7.391s 00:09:49.242 07:24:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:49.242 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.242 ************************************ 00:09:49.242 END TEST blockdev_nvme_gpt 00:09:49.242 ************************************ 00:09:49.500 07:24:58 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:49.500 07:24:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.500 07:24:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.500 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.500 ************************************ 00:09:49.500 START TEST nvme 00:09:49.500 ************************************ 00:09:49.500 07:24:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:49.500 * Looking for test storage... 
00:09:49.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:49.500 07:24:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:49.500 07:24:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:49.500 07:24:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:49.500 07:24:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:49.500 07:24:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:49.500 07:24:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:49.500 07:24:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:49.500 07:24:58 -- scripts/common.sh@335 -- # IFS=.-: 00:09:49.500 07:24:58 -- scripts/common.sh@335 -- # read -ra ver1 00:09:49.500 07:24:58 -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.500 07:24:58 -- scripts/common.sh@336 -- # read -ra ver2 00:09:49.500 07:24:58 -- scripts/common.sh@337 -- # local 'op=<' 00:09:49.500 07:24:58 -- scripts/common.sh@339 -- # ver1_l=2 00:09:49.500 07:24:58 -- scripts/common.sh@340 -- # ver2_l=1 00:09:49.500 07:24:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:49.500 07:24:58 -- scripts/common.sh@343 -- # case "$op" in 00:09:49.500 07:24:58 -- scripts/common.sh@344 -- # : 1 00:09:49.500 07:24:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:49.500 07:24:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.500 07:24:58 -- scripts/common.sh@364 -- # decimal 1 00:09:49.500 07:24:58 -- scripts/common.sh@352 -- # local d=1 00:09:49.500 07:24:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.500 07:24:58 -- scripts/common.sh@354 -- # echo 1 00:09:49.500 07:24:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:49.500 07:24:58 -- scripts/common.sh@365 -- # decimal 2 00:09:49.500 07:24:58 -- scripts/common.sh@352 -- # local d=2 00:09:49.500 07:24:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.500 07:24:58 -- scripts/common.sh@354 -- # echo 2 00:09:49.500 07:24:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:49.500 07:24:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:49.500 07:24:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:49.500 07:24:58 -- scripts/common.sh@367 -- # return 0 00:09:49.500 07:24:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.500 07:24:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:49.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.500 --rc genhtml_branch_coverage=1 00:09:49.500 --rc genhtml_function_coverage=1 00:09:49.500 --rc genhtml_legend=1 00:09:49.500 --rc geninfo_all_blocks=1 00:09:49.500 --rc geninfo_unexecuted_blocks=1 00:09:49.500 00:09:49.500 ' 00:09:49.500 07:24:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:49.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.500 --rc genhtml_branch_coverage=1 00:09:49.500 --rc genhtml_function_coverage=1 00:09:49.500 --rc genhtml_legend=1 00:09:49.500 --rc geninfo_all_blocks=1 00:09:49.500 --rc geninfo_unexecuted_blocks=1 00:09:49.500 00:09:49.500 ' 00:09:49.500 07:24:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:49.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.500 --rc genhtml_branch_coverage=1 00:09:49.500 --rc genhtml_function_coverage=1 00:09:49.500 --rc genhtml_legend=1 00:09:49.500 --rc geninfo_all_blocks=1 00:09:49.500 --rc geninfo_unexecuted_blocks=1 00:09:49.500 00:09:49.500 ' 00:09:49.500 07:24:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:49.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.501 --rc genhtml_branch_coverage=1 00:09:49.501 --rc genhtml_function_coverage=1 00:09:49.501 --rc genhtml_legend=1 00:09:49.501 --rc geninfo_all_blocks=1 00:09:49.501 --rc geninfo_unexecuted_blocks=1 00:09:49.501 00:09:49.501 ' 00:09:49.501 07:24:58 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.438 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.438 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.438 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.438 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.438 07:24:59 -- nvme/nvme.sh@79 -- # uname 00:09:50.438 07:24:59 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:50.438 07:24:59 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:50.438 07:24:59 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:50.438 07:24:59 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:50.438 07:24:59 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:50.438 07:24:59 -- common/autotest_common.sh@1055 -- # echo 0 00:09:50.438 07:24:59 -- common/autotest_common.sh@1057 -- # stubpid=63644 00:09:50.438 Waiting for stub to ready for secondary processes... 00:09:50.438 07:24:59 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:50.438 07:24:59 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:50.438 07:24:59 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:50.438 07:24:59 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63644 ]] 00:09:50.438 07:24:59 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:50.697 [2024-11-19 07:24:59.712825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:50.697 [2024-11-19 07:24:59.712933] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.263 [2024-11-19 07:25:00.455072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.522 [2024-11-19 07:25:00.622731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.522 [2024-11-19 07:25:00.622896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.522 [2024-11-19 07:25:00.622914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.522 [2024-11-19 07:25:00.641098] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:51.522 [2024-11-19 07:25:00.649396] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:51.522 [2024-11-19 07:25:00.649538] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:51.522 [2024-11-19 07:25:00.658442] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:51.522 [2024-11-19 07:25:00.658600] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:51.522 [2024-11-19 07:25:00.658712] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:51.522 [2024-11-19 07:25:00.665471] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:51.522 [2024-11-19 07:25:00.665597] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:51.522 [2024-11-19 07:25:00.665679] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:51.522 [2024-11-19 07:25:00.672381] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:51.522 [2024-11-19 07:25:00.672501] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:51.522 [2024-11-19 07:25:00.672595] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:51.522 [2024-11-19 07:25:00.672687] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:51.522 [2024-11-19 07:25:00.672845] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:51.522 07:25:00 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:51.522 done. 00:09:51.522 07:25:00 -- common/autotest_common.sh@1064 -- # echo done. 
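The stub process started above holds the DPDK primary-process state (hugepages plus the probed controllers and the cuse devices just created) so the nvme tests that follow can start faster; readiness is signalled through a marker file. A paraphrased sketch of the gate traced above, with the pid handling simplified:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    # stub creates /var/run/spdk_stub0 once initialization completes
    while [[ ! -e /var/run/spdk_stub0 && -e /proc/$stubpid ]]; do sleep 1s; done
    echo done.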
00:09:51.522 07:25:00 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:51.522 07:25:00 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:51.522 07:25:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.522 07:25:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.522 ************************************ 00:09:51.522 START TEST nvme_reset 00:09:51.522 ************************************ 00:09:51.522 07:25:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:51.782 Initializing NVMe Controllers 00:09:51.782 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:51.782 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:51.782 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:51.782 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:51.782 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:51.782 00:09:51.782 real 0m0.213s 00:09:51.782 user 0m0.052s 00:09:51.782 sys 0m0.104s 00:09:51.782 07:25:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.782 07:25:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.782 ************************************ 00:09:51.782 END TEST nvme_reset 00:09:51.782 ************************************ 00:09:51.782 07:25:00 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:51.782 07:25:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.782 07:25:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.782 07:25:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.782 ************************************ 00:09:51.782 START TEST nvme_identify 00:09:51.782 ************************************ 00:09:51.782 07:25:00 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:51.782 07:25:00 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:51.782 07:25:00 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:51.782 07:25:00 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:51.782 07:25:00 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:51.782 07:25:00 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:51.782 07:25:00 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:51.782 07:25:00 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.782 07:25:00 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:51.782 07:25:00 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.782 07:25:00 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:51.782 07:25:00 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:51.782 07:25:00 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:52.043 ===================================================== 00:09:52.043 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:52.043 ===================================================== 00:09:52.043 Controller Capabilities/Features 00:09:52.043 ================================ 00:09:52.043 Vendor ID: 1b36 00:09:52.043 Subsystem Vendor ID: 1af4 00:09:52.043 Serial Number: 12340 00:09:52.043 Model Number: QEMU NVMe Ctrl 00:09:52.043 Firmware Version: 8.0.0 00:09:52.043 Recommended Arb Burst: 6 00:09:52.043 IEEE OUI Identifier: 00 54 52 00:09:52.043 Multi-path I/O 00:09:52.043 May have multiple subsystem ports: No 00:09:52.043 May have 
multiple controllers: No 00:09:52.043 Associated with SR-IOV VF: No 00:09:52.043 Max Data Transfer Size: 524288 00:09:52.043 Max Number of Namespaces: 256 00:09:52.043 Max Number of I/O Queues: 64 00:09:52.043 NVMe Specification Version (VS): 1.4 00:09:52.043 NVMe Specification Version (Identify): 1.4 00:09:52.043 Maximum Queue Entries: 2048 00:09:52.043 Contiguous Queues Required: Yes 00:09:52.043 Arbitration Mechanisms Supported 00:09:52.043 Weighted Round Robin: Not Supported 00:09:52.043 Vendor Specific: Not Supported 00:09:52.043 Reset Timeout: 7500 ms 00:09:52.043 Doorbell Stride: 4 bytes 00:09:52.043 NVM Subsystem Reset: Not Supported 00:09:52.043 Command Sets Supported 00:09:52.043 NVM Command Set: Supported 00:09:52.043 Boot Partition: Not Supported 00:09:52.043 Memory Page Size Minimum: 4096 bytes 00:09:52.043 Memory Page Size Maximum: 65536 bytes 00:09:52.043 Persistent Memory Region: Not Supported 00:09:52.043 Optional Asynchronous Events Supported 00:09:52.043 Namespace Attribute Notices: Supported 00:09:52.043 Firmware Activation Notices: Not Supported 00:09:52.043 ANA Change Notices: Not Supported 00:09:52.043 PLE Aggregate Log Change Notices: Not Supported 00:09:52.043 LBA Status Info Alert Notices: Not Supported 00:09:52.043 EGE Aggregate Log Change Notices: Not Supported 00:09:52.043 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.043 Zone Descriptor Change Notices: Not Supported 00:09:52.043 Discovery Log Change Notices: Not Supported 00:09:52.043 Controller Attributes 00:09:52.043 128-bit Host Identifier: Not Supported 00:09:52.043 Non-Operational Permissive Mode: Not Supported 00:09:52.043 NVM Sets: Not Supported 00:09:52.043 Read Recovery Levels: Not Supported 00:09:52.043 Endurance Groups: Not Supported 00:09:52.043 Predictable Latency Mode: Not Supported 00:09:52.043 Traffic Based Keep ALive: Not Supported 00:09:52.043 Namespace Granularity: Not Supported 00:09:52.043 SQ Associations: Not Supported 00:09:52.043 UUID List: Not Supported 00:09:52.043 Multi-Domain Subsystem: Not Supported 00:09:52.043 Fixed Capacity Management: Not Supported 00:09:52.043 Variable Capacity Management: Not Supported 00:09:52.043 Delete Endurance Group: Not Supported 00:09:52.043 Delete NVM Set: Not Supported 00:09:52.043 Extended LBA Formats Supported: Supported 00:09:52.044 Flexible Data Placement Supported: Not Supported 00:09:52.044 00:09:52.044 Controller Memory Buffer Support 00:09:52.044 ================================ 00:09:52.044 Supported: No 00:09:52.044 00:09:52.044 Persistent Memory Region Support 00:09:52.044 ================================ 00:09:52.044 Supported: No 00:09:52.044 00:09:52.044 Admin Command Set Attributes 00:09:52.044 ============================ 00:09:52.044 Security Send/Receive: Not Supported 00:09:52.044 Format NVM: Supported 00:09:52.044 Firmware Activate/Download: Not Supported 00:09:52.044 Namespace Management: Supported 00:09:52.044 Device Self-Test: Not Supported 00:09:52.044 Directives: Supported 00:09:52.044 NVMe-MI: Not Supported 00:09:52.044 Virtualization Management: Not Supported 00:09:52.044 Doorbell Buffer Config: Supported 00:09:52.044 Get LBA Status Capability: Not Supported 00:09:52.044 Command & Feature Lockdown Capability: Not Supported 00:09:52.044 Abort Command Limit: 4 00:09:52.044 Async Event Request Limit: 4 00:09:52.044 Number of Firmware Slots: N/A 00:09:52.044 Firmware Slot 1 Read-Only: N/A 00:09:52.044 Firmware Activation Without Reset: N/A 00:09:52.044 Multiple Update Detection Support: N/A 00:09:52.044 Firmware 
Update Granularity: No Information Provided 00:09:52.044 Per-Namespace SMART Log: Yes 00:09:52.044 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.044 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:52.044 Command Effects Log Page: Supported 00:09:52.044 Get Log Page Extended Data: Supported 00:09:52.044 Telemetry Log Pages: Not Supported 00:09:52.044 Persistent Event Log Pages: Not Supported 00:09:52.044 Supported Log Pages Log Page: May Support 00:09:52.044 Commands Supported & Effects Log Page: Not Supported 00:09:52.044 Feature Identifiers & Effects Log Page:May Support 00:09:52.044 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.044 Data Area 4 for Telemetry Log: Not Supported 00:09:52.044 Error Log Page Entries Supported: 1 00:09:52.044 Keep Alive: Not Supported 00:09:52.044 00:09:52.044 NVM Command Set Attributes 00:09:52.044 ========================== 00:09:52.044 Submission Queue Entry Size 00:09:52.044 Max: 64 00:09:52.044 Min: 64 00:09:52.044 Completion Queue Entry Size 00:09:52.044 Max: 16 00:09:52.044 Min: 16 00:09:52.044 Number of Namespaces: 256 00:09:52.044 Compare Command: Supported 00:09:52.044 Write Uncorrectable Command: Not Supported 00:09:52.044 Dataset Management Command: Supported 00:09:52.044 Write Zeroes Command: Supported 00:09:52.044 Set Features Save Field: Supported 00:09:52.044 Reservations: Not Supported 00:09:52.044 Timestamp: Supported 00:09:52.044 Copy: Supported 00:09:52.044 Volatile Write Cache: Present 00:09:52.044 Atomic Write Unit (Normal): 1 00:09:52.044 Atomic Write Unit (PFail): 1 00:09:52.044 Atomic Compare & Write Unit: 1 00:09:52.044 Fused Compare & Write: Not Supported 00:09:52.044 Scatter-Gather List 00:09:52.044 SGL Command Set: Supported 00:09:52.044 SGL Keyed: Not Supported 00:09:52.044 SGL Bit Bucket Descriptor: Not Supported 00:09:52.044 SGL Metadata Pointer: Not Supported 00:09:52.044 Oversized SGL: Not Supported 00:09:52.044 SGL Metadata Address: Not Supported 00:09:52.044 SGL Offset: Not Supported 00:09:52.044 Transport SGL Data Block: Not Supported 00:09:52.044 Replay Protected Memory Block: Not Supported 00:09:52.044 00:09:52.044 Firmware Slot Information 00:09:52.044 ========================= 00:09:52.044 Active slot: 1 00:09:52.044 Slot 1 Firmware Revision: 1.0 00:09:52.044 00:09:52.044 00:09:52.044 Commands Supported and Effects 00:09:52.044 ============================== 00:09:52.044 Admin Commands 00:09:52.044 -------------- 00:09:52.044 Delete I/O Submission Queue (00h): Supported 00:09:52.044 Create I/O Submission Queue (01h): Supported 00:09:52.044 Get Log Page (02h): Supported 00:09:52.044 Delete I/O Completion Queue (04h): Supported 00:09:52.044 Create I/O Completion Queue (05h): Supported 00:09:52.044 Identify (06h): Supported 00:09:52.044 Abort (08h): Supported 00:09:52.044 Set Features (09h): Supported 00:09:52.044 Get Features (0Ah): Supported 00:09:52.044 Asynchronous Event Request (0Ch): Supported 00:09:52.044 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.044 Directive Send (19h): Supported 00:09:52.044 Directive Receive (1Ah): Supported 00:09:52.044 Virtualization Management (1Ch): Supported 00:09:52.044 Doorbell Buffer Config (7Ch): Supported 00:09:52.044 Format NVM (80h): Supported LBA-Change 00:09:52.044 I/O Commands 00:09:52.044 ------------ 00:09:52.044 Flush (00h): Supported LBA-Change 00:09:52.044 Write (01h): Supported LBA-Change 00:09:52.044 Read (02h): Supported 00:09:52.044 Compare (05h): Supported 00:09:52.044 Write Zeroes (08h): Supported LBA-Change 
00:09:52.044 Dataset Management (09h): Supported LBA-Change 00:09:52.044 Unknown (0Ch): Supported 00:09:52.044 Unknown (12h): Supported 00:09:52.044 Copy (19h): Supported LBA-Change 00:09:52.044 Unknown (1Dh): Supported LBA-Change 00:09:52.044 00:09:52.045 Error Log 00:09:52.045 ========= 00:09:52.045 00:09:52.045 Arbitration 00:09:52.045 =========== 00:09:52.045 Arbitration Burst: no limit 00:09:52.045 00:09:52.045 Power Management 00:09:52.045 ================ 00:09:52.045 Number of Power States: 1 00:09:52.045 Current Power State: Power State #0 00:09:52.045 Power State #0: 00:09:52.045 Max Power: 25.00 W 00:09:52.045 Non-Operational State: Operational 00:09:52.045 Entry Latency: 16 microseconds 00:09:52.045 Exit Latency: 4 microseconds 00:09:52.045 Relative Read Throughput: 0 00:09:52.045 Relative Read Latency: 0 00:09:52.045 Relative Write Throughput: 0 00:09:52.045 Relative Write Latency: 0 00:09:52.045 [2024-11-19 07:25:01.149264] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63674 terminated unexpected 00:09:52.045 [2024-11-19 07:25:01.150226] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63674 terminated unexpected 00:09:52.045 Idle Power: Not Reported 00:09:52.045 Active Power: Not Reported 00:09:52.045 Non-Operational Permissive Mode: Not Supported 00:09:52.045 00:09:52.045 Health Information 00:09:52.045 ================== 00:09:52.045 Critical Warnings: 00:09:52.045 Available Spare Space: OK 00:09:52.045 Temperature: OK 00:09:52.045 Device Reliability: OK 00:09:52.045 Read Only: No 00:09:52.045 Volatile Memory Backup: OK 00:09:52.045 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.045 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.045 Available Spare: 0% 00:09:52.045 Available Spare Threshold: 0% 00:09:52.045 Life Percentage Used: 0% 00:09:52.045 Data Units Read: 1912 00:09:52.045 Data Units Written: 885 00:09:52.045 Host Read Commands: 94329 00:09:52.045 Host Write Commands: 46879 00:09:52.045 Controller Busy Time: 0 minutes 00:09:52.045 Power Cycles: 0 00:09:52.045 Power On Hours: 0 hours 00:09:52.045 Unsafe Shutdowns: 0 00:09:52.045 Unrecoverable Media Errors: 0 00:09:52.045 Lifetime Error Log Entries: 0 00:09:52.045 Warning Temperature Time: 0 minutes 00:09:52.045 Critical Temperature Time: 0 minutes 00:09:52.045 00:09:52.045 Number of Queues 00:09:52.045 ================ 00:09:52.045 Number of I/O Submission Queues: 64 00:09:52.045 Number of I/O Completion Queues: 64 00:09:52.045 00:09:52.045 ZNS Specific Controller Data 00:09:52.045 ============================ 00:09:52.045 Zone Append Size Limit: 0 00:09:52.045 00:09:52.045 00:09:52.045 Active Namespaces 00:09:52.045 ================= 00:09:52.045 Namespace ID:1 00:09:52.045 Error Recovery Timeout: Unlimited 00:09:52.045 Command Set Identifier: NVM (00h) 00:09:52.045 Deallocate: Supported 00:09:52.045 Deallocated/Unwritten Error: Supported 00:09:52.045 Deallocated Read Value: All 0x00 00:09:52.045 Deallocate in Write Zeroes: Not Supported 00:09:52.045 Deallocated Guard Field: 0xFFFF 00:09:52.045 Flush: Supported 00:09:52.045 Reservation: Not Supported 00:09:52.045 Metadata Transferred as: Separate Metadata Buffer 00:09:52.045 Namespace Sharing Capabilities: Private 00:09:52.045 Size (in LBAs): 1548666 (5GiB) 00:09:52.045 Capacity (in LBAs): 1548666 (5GiB) 00:09:52.045 Utilization (in LBAs): 1548666 (5GiB) 00:09:52.045 Thin Provisioning: Not Supported 00:09:52.045 Per-NS Atomic Units: No 00:09:52.045 Maximum Single Source Range
Length: 128 00:09:52.045 Maximum Copy Length: 128 00:09:52.045 Maximum Source Range Count: 128 00:09:52.045 NGUID/EUI64 Never Reused: No 00:09:52.045 Namespace Write Protected: No 00:09:52.045 Number of LBA Formats: 8 00:09:52.045 Current LBA Format: LBA Format #07 00:09:52.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.045 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.045 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.045 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.045 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.045 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.045 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.045 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.045 00:09:52.045 ===================================================== 00:09:52.045 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:52.045 ===================================================== 00:09:52.045 Controller Capabilities/Features 00:09:52.045 ================================ 00:09:52.045 Vendor ID: 1b36 00:09:52.045 Subsystem Vendor ID: 1af4 00:09:52.045 Serial Number: 12341 00:09:52.045 Model Number: QEMU NVMe Ctrl 00:09:52.045 Firmware Version: 8.0.0 00:09:52.045 Recommended Arb Burst: 6 00:09:52.045 IEEE OUI Identifier: 00 54 52 00:09:52.045 Multi-path I/O 00:09:52.045 May have multiple subsystem ports: No 00:09:52.045 May have multiple controllers: No 00:09:52.045 Associated with SR-IOV VF: No 00:09:52.045 Max Data Transfer Size: 524288 00:09:52.045 Max Number of Namespaces: 256 00:09:52.045 Max Number of I/O Queues: 64 00:09:52.045 NVMe Specification Version (VS): 1.4 00:09:52.045 NVMe Specification Version (Identify): 1.4 00:09:52.045 Maximum Queue Entries: 2048 00:09:52.045 Contiguous Queues Required: Yes 00:09:52.045 Arbitration Mechanisms Supported 00:09:52.045 Weighted Round Robin: Not Supported 00:09:52.045 Vendor Specific: Not Supported 00:09:52.045 Reset Timeout: 7500 ms 00:09:52.045 Doorbell Stride: 4 bytes 00:09:52.045 NVM Subsystem Reset: Not Supported 00:09:52.045 Command Sets Supported 00:09:52.045 NVM Command Set: Supported 00:09:52.045 Boot Partition: Not Supported 00:09:52.045 Memory Page Size Minimum: 4096 bytes 00:09:52.045 Memory Page Size Maximum: 65536 bytes 00:09:52.045 Persistent Memory Region: Not Supported 00:09:52.045 Optional Asynchronous Events Supported 00:09:52.045 Namespace Attribute Notices: Supported 00:09:52.045 Firmware Activation Notices: Not Supported 00:09:52.045 ANA Change Notices: Not Supported 00:09:52.045 PLE Aggregate Log Change Notices: Not Supported 00:09:52.045 LBA Status Info Alert Notices: Not Supported 00:09:52.045 EGE Aggregate Log Change Notices: Not Supported 00:09:52.045 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.045 Zone Descriptor Change Notices: Not Supported 00:09:52.045 Discovery Log Change Notices: Not Supported 00:09:52.045 Controller Attributes 00:09:52.045 128-bit Host Identifier: Not Supported 00:09:52.045 Non-Operational Permissive Mode: Not Supported 00:09:52.045 NVM Sets: Not Supported 00:09:52.045 Read Recovery Levels: Not Supported 00:09:52.045 Endurance Groups: Not Supported 00:09:52.045 Predictable Latency Mode: Not Supported 00:09:52.045 Traffic Based Keep ALive: Not Supported 00:09:52.045 Namespace Granularity: Not Supported 00:09:52.045 SQ Associations: Not Supported 00:09:52.045 UUID List: Not Supported 00:09:52.045 Multi-Domain Subsystem: Not Supported 00:09:52.045 Fixed Capacity Management: Not Supported 00:09:52.045 Variable 
Capacity Management: Not Supported 00:09:52.046 Delete Endurance Group: Not Supported 00:09:52.046 Delete NVM Set: Not Supported 00:09:52.046 Extended LBA Formats Supported: Supported 00:09:52.046 Flexible Data Placement Supported: Not Supported 00:09:52.046 00:09:52.046 Controller Memory Buffer Support 00:09:52.046 ================================ 00:09:52.046 Supported: No 00:09:52.046 00:09:52.046 Persistent Memory Region Support 00:09:52.046 ================================ 00:09:52.046 Supported: No 00:09:52.046 00:09:52.046 Admin Command Set Attributes 00:09:52.046 ============================ 00:09:52.046 Security Send/Receive: Not Supported 00:09:52.046 Format NVM: Supported 00:09:52.046 Firmware Activate/Download: Not Supported 00:09:52.046 Namespace Management: Supported 00:09:52.046 Device Self-Test: Not Supported 00:09:52.046 Directives: Supported 00:09:52.046 NVMe-MI: Not Supported 00:09:52.046 Virtualization Management: Not Supported 00:09:52.046 Doorbell Buffer Config: Supported 00:09:52.046 Get LBA Status Capability: Not Supported 00:09:52.046 Command & Feature Lockdown Capability: Not Supported 00:09:52.046 Abort Command Limit: 4 00:09:52.046 Async Event Request Limit: 4 00:09:52.046 Number of Firmware Slots: N/A 00:09:52.046 Firmware Slot 1 Read-Only: N/A 00:09:52.046 Firmware Activation Without Reset: N/A 00:09:52.046 Multiple Update Detection Support: N/A 00:09:52.046 Firmware Update Granularity: No Information Provided 00:09:52.046 Per-Namespace SMART Log: Yes 00:09:52.046 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.046 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:52.046 Command Effects Log Page: Supported 00:09:52.046 Get Log Page Extended Data: Supported 00:09:52.046 Telemetry Log Pages: Not Supported 00:09:52.046 Persistent Event Log Pages: Not Supported 00:09:52.046 Supported Log Pages Log Page: May Support 00:09:52.046 Commands Supported & Effects Log Page: Not Supported 00:09:52.046 Feature Identifiers & Effects Log Page:May Support 00:09:52.046 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.046 Data Area 4 for Telemetry Log: Not Supported 00:09:52.046 Error Log Page Entries Supported: 1 00:09:52.046 Keep Alive: Not Supported 00:09:52.046 00:09:52.046 NVM Command Set Attributes 00:09:52.046 ========================== 00:09:52.046 Submission Queue Entry Size 00:09:52.046 Max: 64 00:09:52.046 Min: 64 00:09:52.046 Completion Queue Entry Size 00:09:52.046 Max: 16 00:09:52.046 Min: 16 00:09:52.046 Number of Namespaces: 256 00:09:52.046 Compare Command: Supported 00:09:52.046 Write Uncorrectable Command: Not Supported 00:09:52.046 Dataset Management Command: Supported 00:09:52.046 Write Zeroes Command: Supported 00:09:52.046 Set Features Save Field: Supported 00:09:52.046 Reservations: Not Supported 00:09:52.046 Timestamp: Supported 00:09:52.046 Copy: Supported 00:09:52.046 Volatile Write Cache: Present 00:09:52.046 Atomic Write Unit (Normal): 1 00:09:52.046 Atomic Write Unit (PFail): 1 00:09:52.046 Atomic Compare & Write Unit: 1 00:09:52.046 Fused Compare & Write: Not Supported 00:09:52.046 Scatter-Gather List 00:09:52.046 SGL Command Set: Supported 00:09:52.046 SGL Keyed: Not Supported 00:09:52.046 SGL Bit Bucket Descriptor: Not Supported 00:09:52.046 SGL Metadata Pointer: Not Supported 00:09:52.046 Oversized SGL: Not Supported 00:09:52.046 SGL Metadata Address: Not Supported 00:09:52.046 SGL Offset: Not Supported 00:09:52.046 Transport SGL Data Block: Not Supported 00:09:52.046 Replay Protected Memory Block: Not Supported 00:09:52.046 
00:09:52.046 Firmware Slot Information 00:09:52.046 ========================= 00:09:52.046 Active slot: 1 00:09:52.046 Slot 1 Firmware Revision: 1.0 00:09:52.046 00:09:52.046 00:09:52.046 Commands Supported and Effects 00:09:52.046 ============================== 00:09:52.046 Admin Commands 00:09:52.046 -------------- 00:09:52.046 Delete I/O Submission Queue (00h): Supported 00:09:52.046 Create I/O Submission Queue (01h): Supported 00:09:52.046 Get Log Page (02h): Supported 00:09:52.046 Delete I/O Completion Queue (04h): Supported 00:09:52.046 Create I/O Completion Queue (05h): Supported 00:09:52.046 Identify (06h): Supported 00:09:52.046 Abort (08h): Supported 00:09:52.046 Set Features (09h): Supported 00:09:52.046 Get Features (0Ah): Supported 00:09:52.046 Asynchronous Event Request (0Ch): Supported 00:09:52.046 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.046 Directive Send (19h): Supported 00:09:52.046 Directive Receive (1Ah): Supported 00:09:52.046 Virtualization Management (1Ch): Supported 00:09:52.046 Doorbell Buffer Config (7Ch): Supported 00:09:52.046 Format NVM (80h): Supported LBA-Change 00:09:52.046 I/O Commands 00:09:52.046 ------------ 00:09:52.046 Flush (00h): Supported LBA-Change 00:09:52.046 Write (01h): Supported LBA-Change 00:09:52.046 Read (02h): Supported 00:09:52.046 Compare (05h): Supported 00:09:52.046 Write Zeroes (08h): Supported LBA-Change 00:09:52.046 Dataset Management (09h): Supported LBA-Change 00:09:52.046 Unknown (0Ch): Supported 00:09:52.046 Unknown (12h): Supported 00:09:52.046 Copy (19h): Supported LBA-Change 00:09:52.046 Unknown (1Dh): Supported LBA-Change 00:09:52.046 00:09:52.046 Error Log 00:09:52.046 ========= 00:09:52.046 00:09:52.046 Arbitration 00:09:52.046 =========== 00:09:52.046 Arbitration Burst: no limit 00:09:52.046 00:09:52.046 Power Management 00:09:52.046 ================ 00:09:52.046 Number of Power States: 1 00:09:52.046 Current Power State: Power State #0 00:09:52.046 Power State #0: 00:09:52.046 Max Power: 25.00 W 00:09:52.046 Non-Operational State: Operational 00:09:52.046 Entry Latency: 16 microseconds 00:09:52.046 Exit Latency: 4 microseconds 00:09:52.046 Relative Read Throughput: 0 00:09:52.046 Relative Read Latency: 0 00:09:52.046 Relative Write Throughput: 0 00:09:52.046 Relative Write Latency: 0 00:09:52.046 Idle Power: Not Reported 00:09:52.046 Active Power: Not Reported 00:09:52.046 Non-Operational Permissive Mode: Not Supported 00:09:52.046 00:09:52.046 Health Information 00:09:52.046 ================== 00:09:52.046 Critical Warnings: 00:09:52.046 Available Spare Space: OK 00:09:52.046 Temperature: OK 00:09:52.046 Device Reliability: OK 00:09:52.046 Read Only: No 00:09:52.046 Volatile Memory Backup: OK 00:09:52.046 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.046 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.046 Available Spare: 0% 00:09:52.046 Available Spare Threshold: 0% 00:09:52.046 Life Percentage Used: 0% 00:09:52.046 Data Units Read: 1361 00:09:52.046 Data Units Written: 632 00:09:52.046 Host Read Commands: 68004 00:09:52.046 Host Write Commands: 33515 00:09:52.046 Controller Busy Time: 0 minutes 00:09:52.046 Power Cycles: 0 00:09:52.046 Power On Hours: 0 hours 00:09:52.046 Unsafe Shutdowns: 0 00:09:52.046 Unrecoverable Media Errors: 0 00:09:52.046 Lifetime Error Log Entries: 0 00:09:52.046 Warning Temperature Time: 0 minutes 00:09:52.046 Critical Temperature Time: 0 minutes 00:09:52.047 00:09:52.047 Number of Queues 00:09:52.047 ================ 00:09:52.047 Number of I/O 
Submission Queues: 64 00:09:52.047 Number of I/O Completion Queues: 64 00:09:52.047 00:09:52.047 ZNS Specific Controller Data 00:09:52.047 ============================ 00:09:52.047 Zone Append Size Limit: 0 00:09:52.047 00:09:52.047 00:09:52.047 Active Namespaces 00:09:52.047 ================= 00:09:52.047 Namespace ID:1 00:09:52.047 Error Recovery Timeout: Unlimited 00:09:52.047 [2024-11-19 07:25:01.151292] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63674 terminated unexpected 00:09:52.047 Command Set Identifier: NVM (00h) 00:09:52.047 Deallocate: Supported 00:09:52.047 Deallocated/Unwritten Error: Supported 00:09:52.047 Deallocated Read Value: All 0x00 00:09:52.047 Deallocate in Write Zeroes: Not Supported 00:09:52.047 Deallocated Guard Field: 0xFFFF 00:09:52.047 Flush: Supported 00:09:52.047 Reservation: Not Supported 00:09:52.047 Namespace Sharing Capabilities: Private 00:09:52.047 Size (in LBAs): 1310720 (5GiB) 00:09:52.047 Capacity (in LBAs): 1310720 (5GiB) 00:09:52.047 Utilization (in LBAs): 1310720 (5GiB) 00:09:52.047 Thin Provisioning: Not Supported 00:09:52.047 Per-NS Atomic Units: No 00:09:52.047 Maximum Single Source Range Length: 128 00:09:52.047 Maximum Copy Length: 128 00:09:52.047 Maximum Source Range Count: 128 00:09:52.047 NGUID/EUI64 Never Reused: No 00:09:52.047 Namespace Write Protected: No 00:09:52.047 Number of LBA Formats: 8 00:09:52.047 Current LBA Format: LBA Format #04 00:09:52.047 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.047 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.047 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.047 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.047 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.047 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.047 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.047 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.047 00:09:52.047 ===================================================== 00:09:52.047 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:52.047 ===================================================== 00:09:52.047 Controller Capabilities/Features 00:09:52.047 ================================ 00:09:52.047 Vendor ID: 1b36 00:09:52.047 Subsystem Vendor ID: 1af4 00:09:52.047 Serial Number: 12343 00:09:52.047 Model Number: QEMU NVMe Ctrl 00:09:52.047 Firmware Version: 8.0.0 00:09:52.047 Recommended Arb Burst: 6 00:09:52.047 IEEE OUI Identifier: 00 54 52 00:09:52.047 Multi-path I/O 00:09:52.047 May have multiple subsystem ports: No 00:09:52.047 May have multiple controllers: Yes 00:09:52.047 Associated with SR-IOV VF: No 00:09:52.047 Max Data Transfer Size: 524288 00:09:52.047 Max Number of Namespaces: 256 00:09:52.047 Max Number of I/O Queues: 64 00:09:52.047 NVMe Specification Version (VS): 1.4 00:09:52.047 NVMe Specification Version (Identify): 1.4 00:09:52.047 Maximum Queue Entries: 2048 00:09:52.047 Contiguous Queues Required: Yes 00:09:52.047 Arbitration Mechanisms Supported 00:09:52.047 Weighted Round Robin: Not Supported 00:09:52.047 Vendor Specific: Not Supported 00:09:52.047 Reset Timeout: 7500 ms 00:09:52.047 Doorbell Stride: 4 bytes 00:09:52.047 NVM Subsystem Reset: Not Supported 00:09:52.047 Command Sets Supported 00:09:52.047 NVM Command Set: Supported 00:09:52.047 Boot Partition: Not Supported 00:09:52.047 Memory Page Size Minimum: 4096 bytes 00:09:52.047 Memory Page Size Maximum: 65536 bytes 00:09:52.047 Persistent Memory Region: Not Supported 00:09:52.047
Optional Asynchronous Events Supported 00:09:52.047 Namespace Attribute Notices: Supported 00:09:52.047 Firmware Activation Notices: Not Supported 00:09:52.047 ANA Change Notices: Not Supported 00:09:52.047 PLE Aggregate Log Change Notices: Not Supported 00:09:52.047 LBA Status Info Alert Notices: Not Supported 00:09:52.047 EGE Aggregate Log Change Notices: Not Supported 00:09:52.047 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.047 Zone Descriptor Change Notices: Not Supported 00:09:52.047 Discovery Log Change Notices: Not Supported 00:09:52.047 Controller Attributes 00:09:52.047 128-bit Host Identifier: Not Supported 00:09:52.047 Non-Operational Permissive Mode: Not Supported 00:09:52.047 NVM Sets: Not Supported 00:09:52.047 Read Recovery Levels: Not Supported 00:09:52.047 Endurance Groups: Supported 00:09:52.047 Predictable Latency Mode: Not Supported 00:09:52.047 Traffic Based Keep ALive: Not Supported 00:09:52.047 Namespace Granularity: Not Supported 00:09:52.047 SQ Associations: Not Supported 00:09:52.047 UUID List: Not Supported 00:09:52.047 Multi-Domain Subsystem: Not Supported 00:09:52.047 Fixed Capacity Management: Not Supported 00:09:52.047 Variable Capacity Management: Not Supported 00:09:52.047 Delete Endurance Group: Not Supported 00:09:52.047 Delete NVM Set: Not Supported 00:09:52.047 Extended LBA Formats Supported: Supported 00:09:52.047 Flexible Data Placement Supported: Supported 00:09:52.047 00:09:52.047 Controller Memory Buffer Support 00:09:52.047 ================================ 00:09:52.047 Supported: No 00:09:52.047 00:09:52.047 Persistent Memory Region Support 00:09:52.047 ================================ 00:09:52.047 Supported: No 00:09:52.047 00:09:52.047 Admin Command Set Attributes 00:09:52.047 ============================ 00:09:52.047 Security Send/Receive: Not Supported 00:09:52.047 Format NVM: Supported 00:09:52.047 Firmware Activate/Download: Not Supported 00:09:52.047 Namespace Management: Supported 00:09:52.047 Device Self-Test: Not Supported 00:09:52.047 Directives: Supported 00:09:52.047 NVMe-MI: Not Supported 00:09:52.047 Virtualization Management: Not Supported 00:09:52.047 Doorbell Buffer Config: Supported 00:09:52.047 Get LBA Status Capability: Not Supported 00:09:52.047 Command & Feature Lockdown Capability: Not Supported 00:09:52.047 Abort Command Limit: 4 00:09:52.047 Async Event Request Limit: 4 00:09:52.047 Number of Firmware Slots: N/A 00:09:52.047 Firmware Slot 1 Read-Only: N/A 00:09:52.047 Firmware Activation Without Reset: N/A 00:09:52.047 Multiple Update Detection Support: N/A 00:09:52.047 Firmware Update Granularity: No Information Provided 00:09:52.047 Per-Namespace SMART Log: Yes 00:09:52.047 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.047 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:52.047 Command Effects Log Page: Supported 00:09:52.047 Get Log Page Extended Data: Supported 00:09:52.047 Telemetry Log Pages: Not Supported 00:09:52.047 Persistent Event Log Pages: Not Supported 00:09:52.047 Supported Log Pages Log Page: May Support 00:09:52.047 Commands Supported & Effects Log Page: Not Supported 00:09:52.047 Feature Identifiers & Effects Log Page:May Support 00:09:52.047 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.047 Data Area 4 for Telemetry Log: Not Supported 00:09:52.047 Error Log Page Entries Supported: 1 00:09:52.047 Keep Alive: Not Supported 00:09:52.047 00:09:52.047 NVM Command Set Attributes 00:09:52.047 ========================== 00:09:52.047 Submission Queue Entry Size 
00:09:52.048 Max: 64 00:09:52.048 Min: 64 00:09:52.048 Completion Queue Entry Size 00:09:52.048 Max: 16 00:09:52.048 Min: 16 00:09:52.048 Number of Namespaces: 256 00:09:52.048 Compare Command: Supported 00:09:52.048 Write Uncorrectable Command: Not Supported 00:09:52.048 Dataset Management Command: Supported 00:09:52.048 Write Zeroes Command: Supported 00:09:52.048 Set Features Save Field: Supported 00:09:52.048 Reservations: Not Supported 00:09:52.048 Timestamp: Supported 00:09:52.048 Copy: Supported 00:09:52.048 Volatile Write Cache: Present 00:09:52.048 Atomic Write Unit (Normal): 1 00:09:52.048 Atomic Write Unit (PFail): 1 00:09:52.048 Atomic Compare & Write Unit: 1 00:09:52.048 Fused Compare & Write: Not Supported 00:09:52.048 Scatter-Gather List 00:09:52.048 SGL Command Set: Supported 00:09:52.048 SGL Keyed: Not Supported 00:09:52.048 SGL Bit Bucket Descriptor: Not Supported 00:09:52.048 SGL Metadata Pointer: Not Supported 00:09:52.048 Oversized SGL: Not Supported 00:09:52.048 SGL Metadata Address: Not Supported 00:09:52.048 SGL Offset: Not Supported 00:09:52.048 Transport SGL Data Block: Not Supported 00:09:52.048 Replay Protected Memory Block: Not Supported 00:09:52.048 00:09:52.048 Firmware Slot Information 00:09:52.048 ========================= 00:09:52.048 Active slot: 1 00:09:52.048 Slot 1 Firmware Revision: 1.0 00:09:52.048 00:09:52.048 00:09:52.048 Commands Supported and Effects 00:09:52.048 ============================== 00:09:52.048 Admin Commands 00:09:52.048 -------------- 00:09:52.048 Delete I/O Submission Queue (00h): Supported 00:09:52.048 Create I/O Submission Queue (01h): Supported 00:09:52.048 Get Log Page (02h): Supported 00:09:52.048 Delete I/O Completion Queue (04h): Supported 00:09:52.048 Create I/O Completion Queue (05h): Supported 00:09:52.048 Identify (06h): Supported 00:09:52.048 Abort (08h): Supported 00:09:52.048 Set Features (09h): Supported 00:09:52.048 Get Features (0Ah): Supported 00:09:52.048 Asynchronous Event Request (0Ch): Supported 00:09:52.048 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.048 Directive Send (19h): Supported 00:09:52.048 Directive Receive (1Ah): Supported 00:09:52.048 Virtualization Management (1Ch): Supported 00:09:52.048 Doorbell Buffer Config (7Ch): Supported 00:09:52.048 Format NVM (80h): Supported LBA-Change 00:09:52.048 I/O Commands 00:09:52.048 ------------ 00:09:52.048 Flush (00h): Supported LBA-Change 00:09:52.048 Write (01h): Supported LBA-Change 00:09:52.048 Read (02h): Supported 00:09:52.048 Compare (05h): Supported 00:09:52.048 Write Zeroes (08h): Supported LBA-Change 00:09:52.048 Dataset Management (09h): Supported LBA-Change 00:09:52.048 Unknown (0Ch): Supported 00:09:52.048 Unknown (12h): Supported 00:09:52.048 Copy (19h): Supported LBA-Change 00:09:52.048 Unknown (1Dh): Supported LBA-Change 00:09:52.048 00:09:52.048 Error Log 00:09:52.048 ========= 00:09:52.048 00:09:52.048 Arbitration 00:09:52.048 =========== 00:09:52.048 Arbitration Burst: no limit 00:09:52.048 00:09:52.048 Power Management 00:09:52.048 ================ 00:09:52.048 Number of Power States: 1 00:09:52.048 Current Power State: Power State #0 00:09:52.048 Power State #0: 00:09:52.048 Max Power: 25.00 W 00:09:52.048 Non-Operational State: Operational 00:09:52.048 Entry Latency: 16 microseconds 00:09:52.048 Exit Latency: 4 microseconds 00:09:52.048 Relative Read Throughput: 0 00:09:52.048 Relative Read Latency: 0 00:09:52.048 Relative Write Throughput: 0 00:09:52.048 Relative Write Latency: 0 00:09:52.048 Idle Power: Not 
Reported 00:09:52.048 Active Power: Not Reported 00:09:52.048 Non-Operational Permissive Mode: Not Supported 00:09:52.048 00:09:52.048 Health Information 00:09:52.048 ================== 00:09:52.048 Critical Warnings: 00:09:52.048 Available Spare Space: OK 00:09:52.048 Temperature: OK 00:09:52.048 Device Reliability: OK 00:09:52.048 Read Only: No 00:09:52.048 Volatile Memory Backup: OK 00:09:52.048 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.048 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.048 Available Spare: 0% 00:09:52.048 Available Spare Threshold: 0% 00:09:52.048 Life Percentage Used: 0% 00:09:52.048 Data Units Read: 1589 00:09:52.048 Data Units Written: 743 00:09:52.048 Host Read Commands: 69819 00:09:52.048 Host Write Commands: 34411 00:09:52.048 Controller Busy Time: 0 minutes 00:09:52.048 Power Cycles: 0 00:09:52.048 Power On Hours: 0 hours 00:09:52.048 Unsafe Shutdowns: 0 00:09:52.048 Unrecoverable Media Errors: 0 00:09:52.048 Lifetime Error Log Entries: 0 00:09:52.048 Warning Temperature Time: 0 minutes 00:09:52.048 Critical Temperature Time: 0 minutes 00:09:52.048 00:09:52.048 Number of Queues 00:09:52.048 ================ 00:09:52.048 Number of I/O Submission Queues: 64 00:09:52.048 Number of I/O Completion Queues: 64 00:09:52.048 00:09:52.048 ZNS Specific Controller Data 00:09:52.048 ============================ 00:09:52.048 Zone Append Size Limit: 0 00:09:52.048 00:09:52.048 00:09:52.048 Active Namespaces 00:09:52.048 ================= 00:09:52.048 Namespace ID:1 00:09:52.048 Error Recovery Timeout: Unlimited 00:09:52.048 Command Set Identifier: NVM (00h) 00:09:52.048 Deallocate: Supported 00:09:52.048 Deallocated/Unwritten Error: Supported 00:09:52.048 Deallocated Read Value: All 0x00 00:09:52.048 Deallocate in Write Zeroes: Not Supported 00:09:52.048 Deallocated Guard Field: 0xFFFF 00:09:52.048 Flush: Supported 00:09:52.048 Reservation: Not Supported 00:09:52.048 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.048 Size (in LBAs): 262144 (1GiB) 00:09:52.048 Capacity (in LBAs): 262144 (1GiB) 00:09:52.048 Utilization (in LBAs): 262144 (1GiB) 00:09:52.048 Thin Provisioning: Not Supported 00:09:52.048 Per-NS Atomic Units: No 00:09:52.048 Maximum Single Source Range Length: 128 00:09:52.048 Maximum Copy Length: 128 00:09:52.048 Maximum Source Range Count: 128 00:09:52.048 NGUID/EUI64 Never Reused: No 00:09:52.048 Namespace Write Protected: No 00:09:52.048 Endurance group ID: 1 00:09:52.048 Number of LBA Formats: 8 00:09:52.048 Current LBA Format: LBA Format #04 00:09:52.048 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.048 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.048 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.048 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.048 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.048 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.048 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.048 [2024-11-19 07:25:01.152406] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63674 terminated unexpected 00:09:52.048 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.048 00:09:52.048 Get Feature FDP: 00:09:52.048 ================ 00:09:52.049 Enabled: Yes 00:09:52.049 FDP configuration index: 0 00:09:52.049 00:09:52.049 FDP configurations log page 00:09:52.049 =========================== 00:09:52.049 Number of FDP configurations: 1 00:09:52.049 Version: 0 00:09:52.049 Size: 112 00:09:52.049 FDP
Configuration Descriptor: 0 00:09:52.049 Descriptor Size: 96 00:09:52.049 Reclaim Group Identifier format: 2 00:09:52.049 FDP Volatile Write Cache: Not Present 00:09:52.049 FDP Configuration: Valid 00:09:52.049 Vendor Specific Size: 0 00:09:52.049 Number of Reclaim Groups: 2 00:09:52.049 Number of Reclaim Unit Handles: 8 00:09:52.049 Max Placement Identifiers: 128 00:09:52.049 Number of Namespaces Supported: 256 00:09:52.049 Reclaim Unit Nominal Size: 6000000 bytes 00:09:52.049 Estimated Reclaim Unit Time Limit: Not Reported 00:09:52.049 RUH Desc #000: RUH Type: Initially Isolated 00:09:52.049 RUH Desc #001: RUH Type: Initially Isolated 00:09:52.049 RUH Desc #002: RUH Type: Initially Isolated 00:09:52.049 RUH Desc #003: RUH Type: Initially Isolated 00:09:52.049 RUH Desc #004: RUH Type: Initially Isolated 00:09:52.049 RUH Desc #005: RUH Type: Initially Isolated 00:09:52.049 RUH Desc #006: RUH Type: Initially Isolated 00:09:52.049 RUH Desc #007: RUH Type: Initially Isolated 00:09:52.049 00:09:52.049 FDP reclaim unit handle usage log page 00:09:52.049 ====================================== 00:09:52.049 Number of Reclaim Unit Handles: 8 00:09:52.049 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:52.049 RUH Usage Desc #001: RUH Attributes: Unused 00:09:52.049 RUH Usage Desc #002: RUH Attributes: Unused 00:09:52.049 RUH Usage Desc #003: RUH Attributes: Unused 00:09:52.049 RUH Usage Desc #004: RUH Attributes: Unused 00:09:52.049 RUH Usage Desc #005: RUH Attributes: Unused 00:09:52.049 RUH Usage Desc #006: RUH Attributes: Unused 00:09:52.049 RUH Usage Desc #007: RUH Attributes: Unused 00:09:52.049 00:09:52.049 FDP statistics log page 00:09:52.049 ======================= 00:09:52.049 Host bytes with metadata written: 487010304 00:09:52.049 Media bytes with metadata written: 487198720 00:09:52.049 Media bytes erased: 0 00:09:52.049 00:09:52.049 FDP events log page 00:09:52.049 =================== 00:09:52.049 Number of FDP events: 0 00:09:52.049 00:09:52.049 ===================================================== 00:09:52.049 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:52.049 ===================================================== 00:09:52.049 Controller Capabilities/Features 00:09:52.049 ================================ 00:09:52.049 Vendor ID: 1b36 00:09:52.049 Subsystem Vendor ID: 1af4 00:09:52.049 Serial Number: 12342 00:09:52.049 Model Number: QEMU NVMe Ctrl 00:09:52.049 Firmware Version: 8.0.0 00:09:52.049 Recommended Arb Burst: 6 00:09:52.049 IEEE OUI Identifier: 00 54 52 00:09:52.049 Multi-path I/O 00:09:52.049 May have multiple subsystem ports: No 00:09:52.049 May have multiple controllers: No 00:09:52.049 Associated with SR-IOV VF: No 00:09:52.049 Max Data Transfer Size: 524288 00:09:52.049 Max Number of Namespaces: 256 00:09:52.049 Max Number of I/O Queues: 64 00:09:52.049 NVMe Specification Version (VS): 1.4 00:09:52.049 NVMe Specification Version (Identify): 1.4 00:09:52.049 Maximum Queue Entries: 2048 00:09:52.049 Contiguous Queues Required: Yes 00:09:52.049 Arbitration Mechanisms Supported 00:09:52.049 Weighted Round Robin: Not Supported 00:09:52.049 Vendor Specific: Not Supported 00:09:52.049 Reset Timeout: 7500 ms 00:09:52.049 Doorbell Stride: 4 bytes 00:09:52.049 NVM Subsystem Reset: Not Supported 00:09:52.049 Command Sets Supported 00:09:52.049 NVM Command Set: Supported 00:09:52.049 Boot Partition: Not Supported 00:09:52.049 Memory Page Size Minimum: 4096 bytes 00:09:52.049 Memory Page Size Maximum: 65536 bytes 00:09:52.049 Persistent Memory Region: Not
Supported 00:09:52.049 Optional Asynchronous Events Supported 00:09:52.049 Namespace Attribute Notices: Supported 00:09:52.049 Firmware Activation Notices: Not Supported 00:09:52.049 ANA Change Notices: Not Supported 00:09:52.049 PLE Aggregate Log Change Notices: Not Supported 00:09:52.049 LBA Status Info Alert Notices: Not Supported 00:09:52.049 EGE Aggregate Log Change Notices: Not Supported 00:09:52.049 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.049 Zone Descriptor Change Notices: Not Supported 00:09:52.049 Discovery Log Change Notices: Not Supported 00:09:52.049 Controller Attributes 00:09:52.049 128-bit Host Identifier: Not Supported 00:09:52.049 Non-Operational Permissive Mode: Not Supported 00:09:52.049 NVM Sets: Not Supported 00:09:52.049 Read Recovery Levels: Not Supported 00:09:52.049 Endurance Groups: Not Supported 00:09:52.049 Predictable Latency Mode: Not Supported 00:09:52.049 Traffic Based Keep Alive: Not Supported 00:09:52.049 Namespace Granularity: Not Supported 00:09:52.049 SQ Associations: Not Supported 00:09:52.049 UUID List: Not Supported 00:09:52.049 Multi-Domain Subsystem: Not Supported 00:09:52.049 Fixed Capacity Management: Not Supported 00:09:52.049 Variable Capacity Management: Not Supported 00:09:52.049 Delete Endurance Group: Not Supported 00:09:52.049 Delete NVM Set: Not Supported 00:09:52.049 Extended LBA Formats Supported: Supported 00:09:52.049 Flexible Data Placement Supported: Not Supported 00:09:52.049 00:09:52.049 Controller Memory Buffer Support 00:09:52.049 ================================ 00:09:52.049 Supported: No 00:09:52.049 00:09:52.049 Persistent Memory Region Support 00:09:52.049 ================================ 00:09:52.049 Supported: No 00:09:52.049 00:09:52.049 Admin Command Set Attributes 00:09:52.049 ============================ 00:09:52.049 Security Send/Receive: Not Supported 00:09:52.049 Format NVM: Supported 00:09:52.049 Firmware Activate/Download: Not Supported 00:09:52.049 Namespace Management: Supported 00:09:52.049 Device Self-Test: Not Supported 00:09:52.049 Directives: Supported 00:09:52.049 NVMe-MI: Not Supported 00:09:52.049 Virtualization Management: Not Supported 00:09:52.049 Doorbell Buffer Config: Supported 00:09:52.049 Get LBA Status Capability: Not Supported 00:09:52.049 Command & Feature Lockdown Capability: Not Supported 00:09:52.049 Abort Command Limit: 4 00:09:52.049 Async Event Request Limit: 4 00:09:52.049 Number of Firmware Slots: N/A 00:09:52.049 Firmware Slot 1 Read-Only: N/A 00:09:52.050 Firmware Activation Without Reset: N/A 00:09:52.050 Multiple Update Detection Support: N/A 00:09:52.050 Firmware Update Granularity: No Information Provided 00:09:52.050 Per-Namespace SMART Log: Yes 00:09:52.050 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.050 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:52.050 Command Effects Log Page: Supported 00:09:52.050 Get Log Page Extended Data: Supported 00:09:52.050 Telemetry Log Pages: Not Supported 00:09:52.050 Persistent Event Log Pages: Not Supported 00:09:52.050 Supported Log Pages Log Page: May Support 00:09:52.050 Commands Supported & Effects Log Page: Not Supported 00:09:52.050 Feature Identifiers & Effects Log Page: May Support 00:09:52.050 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.050 Data Area 4 for Telemetry Log: Not Supported 00:09:52.050 Error Log Page Entries Supported: 1 00:09:52.050 Keep Alive: Not Supported 00:09:52.050 00:09:52.050 NVM Command Set Attributes 00:09:52.050 ========================== 00:09:52.050 
Submission Queue Entry Size 00:09:52.050 Max: 64 00:09:52.050 Min: 64 00:09:52.050 Completion Queue Entry Size 00:09:52.050 Max: 16 00:09:52.050 Min: 16 00:09:52.050 Number of Namespaces: 256 00:09:52.050 Compare Command: Supported 00:09:52.050 Write Uncorrectable Command: Not Supported 00:09:52.050 Dataset Management Command: Supported 00:09:52.050 Write Zeroes Command: Supported 00:09:52.050 Set Features Save Field: Supported 00:09:52.050 Reservations: Not Supported 00:09:52.050 Timestamp: Supported 00:09:52.050 Copy: Supported 00:09:52.050 Volatile Write Cache: Present 00:09:52.050 Atomic Write Unit (Normal): 1 00:09:52.050 Atomic Write Unit (PFail): 1 00:09:52.050 Atomic Compare & Write Unit: 1 00:09:52.050 Fused Compare & Write: Not Supported 00:09:52.050 Scatter-Gather List 00:09:52.050 SGL Command Set: Supported 00:09:52.050 SGL Keyed: Not Supported 00:09:52.050 SGL Bit Bucket Descriptor: Not Supported 00:09:52.050 SGL Metadata Pointer: Not Supported 00:09:52.050 Oversized SGL: Not Supported 00:09:52.050 SGL Metadata Address: Not Supported 00:09:52.050 SGL Offset: Not Supported 00:09:52.050 Transport SGL Data Block: Not Supported 00:09:52.050 Replay Protected Memory Block: Not Supported 00:09:52.050 00:09:52.050 Firmware Slot Information 00:09:52.050 ========================= 00:09:52.050 Active slot: 1 00:09:52.050 Slot 1 Firmware Revision: 1.0 00:09:52.050 00:09:52.050 00:09:52.050 Commands Supported and Effects 00:09:52.050 ============================== 00:09:52.050 Admin Commands 00:09:52.050 -------------- 00:09:52.050 Delete I/O Submission Queue (00h): Supported 00:09:52.050 Create I/O Submission Queue (01h): Supported 00:09:52.050 Get Log Page (02h): Supported 00:09:52.050 Delete I/O Completion Queue (04h): Supported 00:09:52.050 Create I/O Completion Queue (05h): Supported 00:09:52.050 Identify (06h): Supported 00:09:52.050 Abort (08h): Supported 00:09:52.050 Set Features (09h): Supported 00:09:52.050 Get Features (0Ah): Supported 00:09:52.050 Asynchronous Event Request (0Ch): Supported 00:09:52.050 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.050 Directive Send (19h): Supported 00:09:52.050 Directive Receive (1Ah): Supported 00:09:52.050 Virtualization Management (1Ch): Supported 00:09:52.050 Doorbell Buffer Config (7Ch): Supported 00:09:52.050 Format NVM (80h): Supported LBA-Change 00:09:52.050 I/O Commands 00:09:52.050 ------------ 00:09:52.050 Flush (00h): Supported LBA-Change 00:09:52.050 Write (01h): Supported LBA-Change 00:09:52.050 Read (02h): Supported 00:09:52.050 Compare (05h): Supported 00:09:52.050 Write Zeroes (08h): Supported LBA-Change 00:09:52.050 Dataset Management (09h): Supported LBA-Change 00:09:52.050 Unknown (0Ch): Supported 00:09:52.050 Unknown (12h): Supported 00:09:52.050 Copy (19h): Supported LBA-Change 00:09:52.050 Unknown (1Dh): Supported LBA-Change 00:09:52.050 00:09:52.050 Error Log 00:09:52.050 ========= 00:09:52.050 00:09:52.050 Arbitration 00:09:52.050 =========== 00:09:52.050 Arbitration Burst: no limit 00:09:52.050 00:09:52.050 Power Management 00:09:52.050 ================ 00:09:52.050 Number of Power States: 1 00:09:52.050 Current Power State: Power State #0 00:09:52.050 Power State #0: 00:09:52.050 Max Power: 25.00 W 00:09:52.050 Non-Operational State: Operational 00:09:52.050 Entry Latency: 16 microseconds 00:09:52.050 Exit Latency: 4 microseconds 00:09:52.050 Relative Read Throughput: 0 00:09:52.050 Relative Read Latency: 0 00:09:52.050 Relative Write Throughput: 0 00:09:52.050 Relative Write Latency: 0 
00:09:52.050 Idle Power: Not Reported 00:09:52.050 Active Power: Not Reported 00:09:52.050 Non-Operational Permissive Mode: Not Supported 00:09:52.050 00:09:52.050 Health Information 00:09:52.050 ================== 00:09:52.050 Critical Warnings: 00:09:52.050 Available Spare Space: OK 00:09:52.050 Temperature: OK 00:09:52.050 Device Reliability: OK 00:09:52.050 Read Only: No 00:09:52.050 Volatile Memory Backup: OK 00:09:52.050 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.050 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.050 Available Spare: 0% 00:09:52.050 Available Spare Threshold: 0% 00:09:52.050 Life Percentage Used: 0% 00:09:52.050 Data Units Read: 4271 00:09:52.050 Data Units Written: 1978 00:09:52.050 Host Read Commands: 205806 00:09:52.050 Host Write Commands: 101206 00:09:52.050 Controller Busy Time: 0 minutes 00:09:52.050 Power Cycles: 0 00:09:52.050 Power On Hours: 0 hours 00:09:52.050 Unsafe Shutdowns: 0 00:09:52.050 Unrecoverable Media Errors: 0 00:09:52.050 Lifetime Error Log Entries: 0 00:09:52.050 Warning Temperature Time: 0 minutes 00:09:52.050 Critical Temperature Time: 0 minutes 00:09:52.050 00:09:52.050 Number of Queues 00:09:52.050 ================ 00:09:52.050 Number of I/O Submission Queues: 64 00:09:52.050 Number of I/O Completion Queues: 64 00:09:52.050 00:09:52.050 ZNS Specific Controller Data 00:09:52.050 ============================ 00:09:52.050 Zone Append Size Limit: 0 00:09:52.050 00:09:52.051 00:09:52.051 Active Namespaces 00:09:52.051 ================= 00:09:52.051 Namespace ID:1 00:09:52.051 Error Recovery Timeout: Unlimited 00:09:52.051 Command Set Identifier: NVM (00h) 00:09:52.051 Deallocate: Supported 00:09:52.051 Deallocated/Unwritten Error: Supported 00:09:52.051 Deallocated Read Value: All 0x00 00:09:52.051 Deallocate in Write Zeroes: Not Supported 00:09:52.051 Deallocated Guard Field: 0xFFFF 00:09:52.051 Flush: Supported 00:09:52.051 Reservation: Not Supported 00:09:52.051 Namespace Sharing Capabilities: Private 00:09:52.051 Size (in LBAs): 1048576 (4GiB) 00:09:52.051 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.051 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.051 Thin Provisioning: Not Supported 00:09:52.051 Per-NS Atomic Units: No 00:09:52.051 Maximum Single Source Range Length: 128 00:09:52.051 Maximum Copy Length: 128 00:09:52.051 Maximum Source Range Count: 128 00:09:52.051 NGUID/EUI64 Never Reused: No 00:09:52.051 Namespace Write Protected: No 00:09:52.051 Number of LBA Formats: 8 00:09:52.051 Current LBA Format: LBA Format #04 00:09:52.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.051 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.051 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.051 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.051 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.051 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.051 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.051 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.051 00:09:52.051 Namespace ID:2 00:09:52.051 Error Recovery Timeout: Unlimited 00:09:52.051 Command Set Identifier: NVM (00h) 00:09:52.051 Deallocate: Supported 00:09:52.051 Deallocated/Unwritten Error: Supported 00:09:52.051 Deallocated Read Value: All 0x00 00:09:52.051 Deallocate in Write Zeroes: Not Supported 00:09:52.051 Deallocated Guard Field: 0xFFFF 00:09:52.051 Flush: Supported 00:09:52.051 Reservation: Not Supported 00:09:52.051 Namespace Sharing Capabilities: Private 00:09:52.051 Size (in LBAs): 
1048576 (4GiB) 00:09:52.051 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.051 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.051 Thin Provisioning: Not Supported 00:09:52.051 Per-NS Atomic Units: No 00:09:52.051 Maximum Single Source Range Length: 128 00:09:52.051 Maximum Copy Length: 128 00:09:52.051 Maximum Source Range Count: 128 00:09:52.051 NGUID/EUI64 Never Reused: No 00:09:52.051 Namespace Write Protected: No 00:09:52.051 Number of LBA Formats: 8 00:09:52.051 Current LBA Format: LBA Format #04 00:09:52.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.051 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.051 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.051 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.051 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.051 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.051 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.051 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.051 00:09:52.051 Namespace ID:3 00:09:52.051 Error Recovery Timeout: Unlimited 00:09:52.051 Command Set Identifier: NVM (00h) 00:09:52.051 Deallocate: Supported 00:09:52.051 Deallocated/Unwritten Error: Supported 00:09:52.051 Deallocated Read Value: All 0x00 00:09:52.051 Deallocate in Write Zeroes: Not Supported 00:09:52.051 Deallocated Guard Field: 0xFFFF 00:09:52.051 Flush: Supported 00:09:52.051 Reservation: Not Supported 00:09:52.051 Namespace Sharing Capabilities: Private 00:09:52.051 Size (in LBAs): 1048576 (4GiB) 00:09:52.051 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.051 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.051 Thin Provisioning: Not Supported 00:09:52.051 Per-NS Atomic Units: No 00:09:52.051 Maximum Single Source Range Length: 128 00:09:52.051 Maximum Copy Length: 128 00:09:52.051 Maximum Source Range Count: 128 00:09:52.051 NGUID/EUI64 Never Reused: No 00:09:52.051 Namespace Write Protected: No 00:09:52.051 Number of LBA Formats: 8 00:09:52.051 Current LBA Format: LBA Format #04 00:09:52.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.051 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.051 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.051 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.051 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.051 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.051 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.051 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.052 00:09:52.052 07:25:01 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.052 07:25:01 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:52.311 ===================================================== 00:09:52.311 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:52.311 ===================================================== 00:09:52.311 Controller Capabilities/Features 00:09:52.311 ================================ 00:09:52.311 Vendor ID: 1b36 00:09:52.311 Subsystem Vendor ID: 1af4 00:09:52.311 Serial Number: 12340 00:09:52.311 Model Number: QEMU NVMe Ctrl 00:09:52.311 Firmware Version: 8.0.0 00:09:52.311 Recommended Arb Burst: 6 00:09:52.311 IEEE OUI Identifier: 00 54 52 00:09:52.311 Multi-path I/O 00:09:52.311 May have multiple subsystem ports: No 00:09:52.311 May have multiple controllers: No 00:09:52.311 Associated with SR-IOV VF: No 00:09:52.311 Max Data Transfer Size: 524288 00:09:52.311 Max Number of Namespaces: 256 
00:09:52.311 Max Number of I/O Queues: 64 00:09:52.311 NVMe Specification Version (VS): 1.4 00:09:52.311 NVMe Specification Version (Identify): 1.4 00:09:52.312 Maximum Queue Entries: 2048 00:09:52.312 Contiguous Queues Required: Yes 00:09:52.312 Arbitration Mechanisms Supported 00:09:52.312 Weighted Round Robin: Not Supported 00:09:52.312 Vendor Specific: Not Supported 00:09:52.312 Reset Timeout: 7500 ms 00:09:52.312 Doorbell Stride: 4 bytes 00:09:52.312 NVM Subsystem Reset: Not Supported 00:09:52.312 Command Sets Supported 00:09:52.312 NVM Command Set: Supported 00:09:52.312 Boot Partition: Not Supported 00:09:52.312 Memory Page Size Minimum: 4096 bytes 00:09:52.312 Memory Page Size Maximum: 65536 bytes 00:09:52.312 Persistent Memory Region: Not Supported 00:09:52.312 Optional Asynchronous Events Supported 00:09:52.312 Namespace Attribute Notices: Supported 00:09:52.312 Firmware Activation Notices: Not Supported 00:09:52.312 ANA Change Notices: Not Supported 00:09:52.312 PLE Aggregate Log Change Notices: Not Supported 00:09:52.312 LBA Status Info Alert Notices: Not Supported 00:09:52.312 EGE Aggregate Log Change Notices: Not Supported 00:09:52.312 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.312 Zone Descriptor Change Notices: Not Supported 00:09:52.312 Discovery Log Change Notices: Not Supported 00:09:52.312 Controller Attributes 00:09:52.312 128-bit Host Identifier: Not Supported 00:09:52.312 Non-Operational Permissive Mode: Not Supported 00:09:52.312 NVM Sets: Not Supported 00:09:52.312 Read Recovery Levels: Not Supported 00:09:52.312 Endurance Groups: Not Supported 00:09:52.312 Predictable Latency Mode: Not Supported 00:09:52.312 Traffic Based Keep Alive: Not Supported 00:09:52.312 Namespace Granularity: Not Supported 00:09:52.312 SQ Associations: Not Supported 00:09:52.312 UUID List: Not Supported 00:09:52.312 Multi-Domain Subsystem: Not Supported 00:09:52.312 Fixed Capacity Management: Not Supported 00:09:52.312 Variable Capacity Management: Not Supported 00:09:52.312 Delete Endurance Group: Not Supported 00:09:52.312 Delete NVM Set: Not Supported 00:09:52.312 Extended LBA Formats Supported: Supported 00:09:52.312 Flexible Data Placement Supported: Not Supported 00:09:52.312 00:09:52.312 Controller Memory Buffer Support 00:09:52.312 ================================ 00:09:52.312 Supported: No 00:09:52.312 00:09:52.312 Persistent Memory Region Support 00:09:52.312 ================================ 00:09:52.312 Supported: No 00:09:52.312 00:09:52.312 Admin Command Set Attributes 00:09:52.312 ============================ 00:09:52.312 Security Send/Receive: Not Supported 00:09:52.312 Format NVM: Supported 00:09:52.312 Firmware Activate/Download: Not Supported 00:09:52.312 Namespace Management: Supported 00:09:52.312 Device Self-Test: Not Supported 00:09:52.312 Directives: Supported 00:09:52.312 NVMe-MI: Not Supported 00:09:52.312 Virtualization Management: Not Supported 00:09:52.312 Doorbell Buffer Config: Supported 00:09:52.312 Get LBA Status Capability: Not Supported 00:09:52.312 Command & Feature Lockdown Capability: Not Supported 00:09:52.312 Abort Command Limit: 4 00:09:52.312 Async Event Request Limit: 4 00:09:52.312 Number of Firmware Slots: N/A 00:09:52.312 Firmware Slot 1 Read-Only: N/A 00:09:52.312 Firmware Activation Without Reset: N/A 00:09:52.312 Multiple Update Detection Support: N/A 00:09:52.312 Firmware Update Granularity: No Information Provided 00:09:52.312 Per-Namespace SMART Log: Yes 00:09:52.312 Asymmetric Namespace Access Log Page: Not Supported 
00:09:52.312 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:52.312 Command Effects Log Page: Supported 00:09:52.312 Get Log Page Extended Data: Supported 00:09:52.312 Telemetry Log Pages: Not Supported 00:09:52.312 Persistent Event Log Pages: Not Supported 00:09:52.312 Supported Log Pages Log Page: May Support 00:09:52.312 Commands Supported & Effects Log Page: Not Supported 00:09:52.312 Feature Identifiers & Effects Log Page: May Support 00:09:52.312 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.312 Data Area 4 for Telemetry Log: Not Supported 00:09:52.312 Error Log Page Entries Supported: 1 00:09:52.312 Keep Alive: Not Supported 00:09:52.312 00:09:52.312 NVM Command Set Attributes 00:09:52.312 ========================== 00:09:52.312 Submission Queue Entry Size 00:09:52.312 Max: 64 00:09:52.312 Min: 64 00:09:52.312 Completion Queue Entry Size 00:09:52.312 Max: 16 00:09:52.312 Min: 16 00:09:52.312 Number of Namespaces: 256 00:09:52.312 Compare Command: Supported 00:09:52.312 Write Uncorrectable Command: Not Supported 00:09:52.312 Dataset Management Command: Supported 00:09:52.312 Write Zeroes Command: Supported 00:09:52.312 Set Features Save Field: Supported 00:09:52.312 Reservations: Not Supported 00:09:52.312 Timestamp: Supported 00:09:52.312 Copy: Supported 00:09:52.312 Volatile Write Cache: Present 00:09:52.312 Atomic Write Unit (Normal): 1 00:09:52.312 Atomic Write Unit (PFail): 1 00:09:52.312 Atomic Compare & Write Unit: 1 00:09:52.312 Fused Compare & Write: Not Supported 00:09:52.312 Scatter-Gather List 00:09:52.312 SGL Command Set: Supported 00:09:52.313 SGL Keyed: Not Supported 00:09:52.313 SGL Bit Bucket Descriptor: Not Supported 00:09:52.313 SGL Metadata Pointer: Not Supported 00:09:52.313 Oversized SGL: Not Supported 00:09:52.313 SGL Metadata Address: Not Supported 00:09:52.313 SGL Offset: Not Supported 00:09:52.313 Transport SGL Data Block: Not Supported 00:09:52.313 Replay Protected Memory Block: Not Supported 00:09:52.313 00:09:52.313 Firmware Slot Information 00:09:52.313 ========================= 00:09:52.313 Active slot: 1 00:09:52.313 Slot 1 Firmware Revision: 1.0 00:09:52.313 00:09:52.313 00:09:52.313 Commands Supported and Effects 00:09:52.313 ============================== 00:09:52.313 Admin Commands 00:09:52.313 -------------- 00:09:52.313 Delete I/O Submission Queue (00h): Supported 00:09:52.313 Create I/O Submission Queue (01h): Supported 00:09:52.313 Get Log Page (02h): Supported 00:09:52.313 Delete I/O Completion Queue (04h): Supported 00:09:52.313 Create I/O Completion Queue (05h): Supported 00:09:52.313 Identify (06h): Supported 00:09:52.313 Abort (08h): Supported 00:09:52.313 Set Features (09h): Supported 00:09:52.313 Get Features (0Ah): Supported 00:09:52.313 Asynchronous Event Request (0Ch): Supported 00:09:52.313 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.313 Directive Send (19h): Supported 00:09:52.313 Directive Receive (1Ah): Supported 00:09:52.313 Virtualization Management (1Ch): Supported 00:09:52.313 Doorbell Buffer Config (7Ch): Supported 00:09:52.313 Format NVM (80h): Supported LBA-Change 00:09:52.313 I/O Commands 00:09:52.313 ------------ 00:09:52.313 Flush (00h): Supported LBA-Change 00:09:52.313 Write (01h): Supported LBA-Change 00:09:52.313 Read (02h): Supported 00:09:52.313 Compare (05h): Supported 00:09:52.313 Write Zeroes (08h): Supported LBA-Change 00:09:52.313 Dataset Management (09h): Supported LBA-Change 00:09:52.313 Unknown (0Ch): Supported 00:09:52.313 Unknown (12h): Supported 00:09:52.313 Copy (19h): 
Supported LBA-Change 00:09:52.313 Unknown (1Dh): Supported LBA-Change 00:09:52.313 00:09:52.313 Error Log 00:09:52.313 ========= 00:09:52.313 00:09:52.313 Arbitration 00:09:52.313 =========== 00:09:52.313 Arbitration Burst: no limit 00:09:52.313 00:09:52.313 Power Management 00:09:52.313 ================ 00:09:52.313 Number of Power States: 1 00:09:52.313 Current Power State: Power State #0 00:09:52.313 Power State #0: 00:09:52.313 Max Power: 25.00 W 00:09:52.313 Non-Operational State: Operational 00:09:52.313 Entry Latency: 16 microseconds 00:09:52.313 Exit Latency: 4 microseconds 00:09:52.313 Relative Read Throughput: 0 00:09:52.313 Relative Read Latency: 0 00:09:52.313 Relative Write Throughput: 0 00:09:52.313 Relative Write Latency: 0 00:09:52.313 Idle Power: Not Reported 00:09:52.313 Active Power: Not Reported 00:09:52.313 Non-Operational Permissive Mode: Not Supported 00:09:52.313 00:09:52.313 Health Information 00:09:52.313 ================== 00:09:52.313 Critical Warnings: 00:09:52.313 Available Spare Space: OK 00:09:52.313 Temperature: OK 00:09:52.313 Device Reliability: OK 00:09:52.313 Read Only: No 00:09:52.313 Volatile Memory Backup: OK 00:09:52.313 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.313 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.313 Available Spare: 0% 00:09:52.313 Available Spare Threshold: 0% 00:09:52.313 Life Percentage Used: 0% 00:09:52.313 Data Units Read: 1912 00:09:52.313 Data Units Written: 885 00:09:52.313 Host Read Commands: 94329 00:09:52.313 Host Write Commands: 46879 00:09:52.313 Controller Busy Time: 0 minutes 00:09:52.313 Power Cycles: 0 00:09:52.313 Power On Hours: 0 hours 00:09:52.313 Unsafe Shutdowns: 0 00:09:52.313 Unrecoverable Media Errors: 0 00:09:52.313 Lifetime Error Log Entries: 0 00:09:52.313 Warning Temperature Time: 0 minutes 00:09:52.313 Critical Temperature Time: 0 minutes 00:09:52.313 00:09:52.313 Number of Queues 00:09:52.313 ================ 00:09:52.313 Number of I/O Submission Queues: 64 00:09:52.313 Number of I/O Completion Queues: 64 00:09:52.313 00:09:52.313 ZNS Specific Controller Data 00:09:52.313 ============================ 00:09:52.313 Zone Append Size Limit: 0 00:09:52.313 00:09:52.313 00:09:52.313 Active Namespaces 00:09:52.313 ================= 00:09:52.313 Namespace ID:1 00:09:52.313 Error Recovery Timeout: Unlimited 00:09:52.313 Command Set Identifier: NVM (00h) 00:09:52.313 Deallocate: Supported 00:09:52.313 Deallocated/Unwritten Error: Supported 00:09:52.313 Deallocated Read Value: All 0x00 00:09:52.313 Deallocate in Write Zeroes: Not Supported 00:09:52.313 Deallocated Guard Field: 0xFFFF 00:09:52.313 Flush: Supported 00:09:52.313 Reservation: Not Supported 00:09:52.313 Metadata Transferred as: Separate Metadata Buffer 00:09:52.313 Namespace Sharing Capabilities: Private 00:09:52.314 Size (in LBAs): 1548666 (5GiB) 00:09:52.314 Capacity (in LBAs): 1548666 (5GiB) 00:09:52.314 Utilization (in LBAs): 1548666 (5GiB) 00:09:52.314 Thin Provisioning: Not Supported 00:09:52.314 Per-NS Atomic Units: No 00:09:52.314 Maximum Single Source Range Length: 128 00:09:52.314 Maximum Copy Length: 128 00:09:52.314 Maximum Source Range Count: 128 00:09:52.314 NGUID/EUI64 Never Reused: No 00:09:52.314 Namespace Write Protected: No 00:09:52.314 Number of LBA Formats: 8 00:09:52.314 Current LBA Format: LBA Format #07 00:09:52.314 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.314 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.314 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.314 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:09:52.314 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.314 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.314 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.314 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.314 00:09:52.314 07:25:01 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.314 07:25:01 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:52.314 ===================================================== 00:09:52.314 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:52.314 ===================================================== 00:09:52.314 Controller Capabilities/Features 00:09:52.314 ================================ 00:09:52.314 Vendor ID: 1b36 00:09:52.314 Subsystem Vendor ID: 1af4 00:09:52.314 Serial Number: 12341 00:09:52.314 Model Number: QEMU NVMe Ctrl 00:09:52.314 Firmware Version: 8.0.0 00:09:52.314 Recommended Arb Burst: 6 00:09:52.314 IEEE OUI Identifier: 00 54 52 00:09:52.314 Multi-path I/O 00:09:52.314 May have multiple subsystem ports: No 00:09:52.314 May have multiple controllers: No 00:09:52.314 Associated with SR-IOV VF: No 00:09:52.314 Max Data Transfer Size: 524288 00:09:52.314 Max Number of Namespaces: 256 00:09:52.314 Max Number of I/O Queues: 64 00:09:52.314 NVMe Specification Version (VS): 1.4 00:09:52.314 NVMe Specification Version (Identify): 1.4 00:09:52.314 Maximum Queue Entries: 2048 00:09:52.314 Contiguous Queues Required: Yes 00:09:52.314 Arbitration Mechanisms Supported 00:09:52.314 Weighted Round Robin: Not Supported 00:09:52.314 Vendor Specific: Not Supported 00:09:52.314 Reset Timeout: 7500 ms 00:09:52.314 Doorbell Stride: 4 bytes 00:09:52.314 NVM Subsystem Reset: Not Supported 00:09:52.314 Command Sets Supported 00:09:52.314 NVM Command Set: Supported 00:09:52.314 Boot Partition: Not Supported 00:09:52.314 Memory Page Size Minimum: 4096 bytes 00:09:52.314 Memory Page Size Maximum: 65536 bytes 00:09:52.314 Persistent Memory Region: Not Supported 00:09:52.314 Optional Asynchronous Events Supported 00:09:52.314 Namespace Attribute Notices: Supported 00:09:52.314 Firmware Activation Notices: Not Supported 00:09:52.314 ANA Change Notices: Not Supported 00:09:52.314 PLE Aggregate Log Change Notices: Not Supported 00:09:52.314 LBA Status Info Alert Notices: Not Supported 00:09:52.314 EGE Aggregate Log Change Notices: Not Supported 00:09:52.314 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.314 Zone Descriptor Change Notices: Not Supported 00:09:52.314 Discovery Log Change Notices: Not Supported 00:09:52.314 Controller Attributes 00:09:52.314 128-bit Host Identifier: Not Supported 00:09:52.314 Non-Operational Permissive Mode: Not Supported 00:09:52.314 NVM Sets: Not Supported 00:09:52.314 Read Recovery Levels: Not Supported 00:09:52.314 Endurance Groups: Not Supported 00:09:52.314 Predictable Latency Mode: Not Supported 00:09:52.314 Traffic Based Keep Alive: Not Supported 00:09:52.314 Namespace Granularity: Not Supported 00:09:52.314 SQ Associations: Not Supported 00:09:52.314 UUID List: Not Supported 00:09:52.314 Multi-Domain Subsystem: Not Supported 00:09:52.314 Fixed Capacity Management: Not Supported 00:09:52.314 Variable Capacity Management: Not Supported 00:09:52.314 Delete Endurance Group: Not Supported 00:09:52.314 Delete NVM Set: Not Supported 00:09:52.314 Extended LBA Formats Supported: Supported 00:09:52.314 Flexible Data Placement Supported: Not Supported 
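The two nvme.sh trace lines above show how each of these dumps is produced: the test script loops over the PCIe BDFs under test and runs spdk_nvme_identify once per controller. A minimal sketch of that loop, assuming bash, with the bdfs array hard-coded to the four controllers visible in this run (the real script discovers them dynamically):

    rootdir=/home/vagrant/spdk_repo/spdk
    # The four QEMU NVMe controllers exercised by this job, per the log.
    bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)
    for bdf in "${bdfs[@]}"; do
        # -r passes the transport ID of the controller to query; the -i 0
        # argument is carried over verbatim from the invocations in the log.
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done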
00:09:52.314 00:09:52.314 Controller Memory Buffer Support 00:09:52.314 ================================ 00:09:52.314 Supported: No 00:09:52.314 00:09:52.314 Persistent Memory Region Support 00:09:52.314 ================================ 00:09:52.314 Supported: No 00:09:52.314 00:09:52.314 Admin Command Set Attributes 00:09:52.314 ============================ 00:09:52.314 Security Send/Receive: Not Supported 00:09:52.314 Format NVM: Supported 00:09:52.314 Firmware Activate/Download: Not Supported 00:09:52.314 Namespace Management: Supported 00:09:52.314 Device Self-Test: Not Supported 00:09:52.314 Directives: Supported 00:09:52.314 NVMe-MI: Not Supported 00:09:52.314 Virtualization Management: Not Supported 00:09:52.314 Doorbell Buffer Config: Supported 00:09:52.314 Get LBA Status Capability: Not Supported 00:09:52.314 Command & Feature Lockdown Capability: Not Supported 00:09:52.314 Abort Command Limit: 4 00:09:52.314 Async Event Request Limit: 4 00:09:52.314 Number of Firmware Slots: N/A 00:09:52.314 Firmware Slot 1 Read-Only: N/A 00:09:52.314 Firmware Activation Without Reset: N/A 00:09:52.314 Multiple Update Detection Support: N/A 00:09:52.314 Firmware Update Granularity: No Information Provided 00:09:52.314 Per-Namespace SMART Log: Yes 00:09:52.314 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.315 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:52.315 Command Effects Log Page: Supported 00:09:52.315 Get Log Page Extended Data: Supported 00:09:52.315 Telemetry Log Pages: Not Supported 00:09:52.315 Persistent Event Log Pages: Not Supported 00:09:52.315 Supported Log Pages Log Page: May Support 00:09:52.315 Commands Supported & Effects Log Page: Not Supported 00:09:52.315 Feature Identifiers & Effects Log Page: May Support 00:09:52.315 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.315 Data Area 4 for Telemetry Log: Not Supported 00:09:52.315 Error Log Page Entries Supported: 1 00:09:52.315 Keep Alive: Not Supported 00:09:52.315 00:09:52.315 NVM Command Set Attributes 00:09:52.315 ========================== 00:09:52.315 Submission Queue Entry Size 00:09:52.315 Max: 64 00:09:52.315 Min: 64 00:09:52.315 Completion Queue Entry Size 00:09:52.315 Max: 16 00:09:52.315 Min: 16 00:09:52.315 Number of Namespaces: 256 00:09:52.315 Compare Command: Supported 00:09:52.315 Write Uncorrectable Command: Not Supported 00:09:52.315 Dataset Management Command: Supported 00:09:52.315 Write Zeroes Command: Supported 00:09:52.315 Set Features Save Field: Supported 00:09:52.315 Reservations: Not Supported 00:09:52.315 Timestamp: Supported 00:09:52.315 Copy: Supported 00:09:52.315 Volatile Write Cache: Present 00:09:52.315 Atomic Write Unit (Normal): 1 00:09:52.315 Atomic Write Unit (PFail): 1 00:09:52.315 Atomic Compare & Write Unit: 1 00:09:52.315 Fused Compare & Write: Not Supported 00:09:52.315 Scatter-Gather List 00:09:52.315 SGL Command Set: Supported 00:09:52.315 SGL Keyed: Not Supported 00:09:52.315 SGL Bit Bucket Descriptor: Not Supported 00:09:52.315 SGL Metadata Pointer: Not Supported 00:09:52.315 Oversized SGL: Not Supported 00:09:52.315 SGL Metadata Address: Not Supported 00:09:52.315 SGL Offset: Not Supported 00:09:52.315 Transport SGL Data Block: Not Supported 00:09:52.315 Replay Protected Memory Block: Not Supported 00:09:52.315 00:09:52.315 Firmware Slot Information 00:09:52.315 ========================= 00:09:52.315 Active slot: 1 00:09:52.315 Slot 1 Firmware Revision: 1.0 00:09:52.315 00:09:52.315 00:09:52.315 Commands Supported and Effects 00:09:52.315 
============================== 00:09:52.315 Admin Commands 00:09:52.315 -------------- 00:09:52.315 Delete I/O Submission Queue (00h): Supported 00:09:52.315 Create I/O Submission Queue (01h): Supported 00:09:52.315 Get Log Page (02h): Supported 00:09:52.315 Delete I/O Completion Queue (04h): Supported 00:09:52.315 Create I/O Completion Queue (05h): Supported 00:09:52.315 Identify (06h): Supported 00:09:52.315 Abort (08h): Supported 00:09:52.315 Set Features (09h): Supported 00:09:52.315 Get Features (0Ah): Supported 00:09:52.315 Asynchronous Event Request (0Ch): Supported 00:09:52.315 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.315 Directive Send (19h): Supported 00:09:52.315 Directive Receive (1Ah): Supported 00:09:52.315 Virtualization Management (1Ch): Supported 00:09:52.315 Doorbell Buffer Config (7Ch): Supported 00:09:52.315 Format NVM (80h): Supported LBA-Change 00:09:52.315 I/O Commands 00:09:52.315 ------------ 00:09:52.315 Flush (00h): Supported LBA-Change 00:09:52.315 Write (01h): Supported LBA-Change 00:09:52.315 Read (02h): Supported 00:09:52.315 Compare (05h): Supported 00:09:52.315 Write Zeroes (08h): Supported LBA-Change 00:09:52.315 Dataset Management (09h): Supported LBA-Change 00:09:52.315 Unknown (0Ch): Supported 00:09:52.315 Unknown (12h): Supported 00:09:52.315 Copy (19h): Supported LBA-Change 00:09:52.315 Unknown (1Dh): Supported LBA-Change 00:09:52.315 00:09:52.315 Error Log 00:09:52.315 ========= 00:09:52.315 00:09:52.315 Arbitration 00:09:52.315 =========== 00:09:52.315 Arbitration Burst: no limit 00:09:52.315 00:09:52.315 Power Management 00:09:52.315 ================ 00:09:52.315 Number of Power States: 1 00:09:52.315 Current Power State: Power State #0 00:09:52.315 Power State #0: 00:09:52.315 Max Power: 25.00 W 00:09:52.315 Non-Operational State: Operational 00:09:52.315 Entry Latency: 16 microseconds 00:09:52.315 Exit Latency: 4 microseconds 00:09:52.315 Relative Read Throughput: 0 00:09:52.315 Relative Read Latency: 0 00:09:52.315 Relative Write Throughput: 0 00:09:52.315 Relative Write Latency: 0 00:09:52.576 Idle Power: Not Reported 00:09:52.576 Active Power: Not Reported 00:09:52.576 Non-Operational Permissive Mode: Not Supported 00:09:52.576 00:09:52.576 Health Information 00:09:52.576 ================== 00:09:52.576 Critical Warnings: 00:09:52.576 Available Spare Space: OK 00:09:52.576 Temperature: OK 00:09:52.576 Device Reliability: OK 00:09:52.576 Read Only: No 00:09:52.576 Volatile Memory Backup: OK 00:09:52.576 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.576 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.576 Available Spare: 0% 00:09:52.576 Available Spare Threshold: 0% 00:09:52.576 Life Percentage Used: 0% 00:09:52.576 Data Units Read: 1361 00:09:52.576 Data Units Written: 632 00:09:52.576 Host Read Commands: 68004 00:09:52.576 Host Write Commands: 33515 00:09:52.576 Controller Busy Time: 0 minutes 00:09:52.576 Power Cycles: 0 00:09:52.576 Power On Hours: 0 hours 00:09:52.576 Unsafe Shutdowns: 0 00:09:52.576 Unrecoverable Media Errors: 0 00:09:52.576 Lifetime Error Log Entries: 0 00:09:52.576 Warning Temperature Time: 0 minutes 00:09:52.576 Critical Temperature Time: 0 minutes 00:09:52.576 00:09:52.576 Number of Queues 00:09:52.576 ================ 00:09:52.576 Number of I/O Submission Queues: 64 00:09:52.576 Number of I/O Completion Queues: 64 00:09:52.576 00:09:52.576 ZNS Specific Controller Data 00:09:52.576 ============================ 00:09:52.576 Zone Append Size Limit: 0 00:09:52.576 00:09:52.576 
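The health section above reports temperatures as integer Kelvin with a derived Celsius value in parentheses; the printed pairs are consistent with a plain K - 273 conversion. A quick shell check against the two values quoted for this controller:

    # 323 K is the current temperature, 343 K the threshold, as logged above.
    for k in 323 343; do
        echo "$k Kelvin = $((k - 273)) Celsius"
    done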
00:09:52.576 Active Namespaces 00:09:52.576 ================= 00:09:52.576 Namespace ID:1 00:09:52.576 Error Recovery Timeout: Unlimited 00:09:52.576 Command Set Identifier: NVM (00h) 00:09:52.576 Deallocate: Supported 00:09:52.576 Deallocated/Unwritten Error: Supported 00:09:52.576 Deallocated Read Value: All 0x00 00:09:52.576 Deallocate in Write Zeroes: Not Supported 00:09:52.576 Deallocated Guard Field: 0xFFFF 00:09:52.576 Flush: Supported 00:09:52.576 Reservation: Not Supported 00:09:52.576 Namespace Sharing Capabilities: Private 00:09:52.576 Size (in LBAs): 1310720 (5GiB) 00:09:52.576 Capacity (in LBAs): 1310720 (5GiB) 00:09:52.576 Utilization (in LBAs): 1310720 (5GiB) 00:09:52.576 Thin Provisioning: Not Supported 00:09:52.576 Per-NS Atomic Units: No 00:09:52.576 Maximum Single Source Range Length: 128 00:09:52.576 Maximum Copy Length: 128 00:09:52.576 Maximum Source Range Count: 128 00:09:52.576 NGUID/EUI64 Never Reused: No 00:09:52.576 Namespace Write Protected: No 00:09:52.576 Number of LBA Formats: 8 00:09:52.576 Current LBA Format: LBA Format #04 00:09:52.576 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.576 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.576 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.576 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.576 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.576 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.576 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.576 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.576 00:09:52.576 07:25:01 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.576 07:25:01 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:52.576 ===================================================== 00:09:52.576 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:52.576 ===================================================== 00:09:52.576 Controller Capabilities/Features 00:09:52.576 ================================ 00:09:52.576 Vendor ID: 1b36 00:09:52.576 Subsystem Vendor ID: 1af4 00:09:52.576 Serial Number: 12342 00:09:52.576 Model Number: QEMU NVMe Ctrl 00:09:52.576 Firmware Version: 8.0.0 00:09:52.576 Recommended Arb Burst: 6 00:09:52.576 IEEE OUI Identifier: 00 54 52 00:09:52.576 Multi-path I/O 00:09:52.576 May have multiple subsystem ports: No 00:09:52.576 May have multiple controllers: No 00:09:52.576 Associated with SR-IOV VF: No 00:09:52.576 Max Data Transfer Size: 524288 00:09:52.576 Max Number of Namespaces: 256 00:09:52.576 Max Number of I/O Queues: 64 00:09:52.576 NVMe Specification Version (VS): 1.4 00:09:52.576 NVMe Specification Version (Identify): 1.4 00:09:52.576 Maximum Queue Entries: 2048 00:09:52.576 Contiguous Queues Required: Yes 00:09:52.576 Arbitration Mechanisms Supported 00:09:52.576 Weighted Round Robin: Not Supported 00:09:52.576 Vendor Specific: Not Supported 00:09:52.576 Reset Timeout: 7500 ms 00:09:52.577 Doorbell Stride: 4 bytes 00:09:52.577 NVM Subsystem Reset: Not Supported 00:09:52.577 Command Sets Supported 00:09:52.577 NVM Command Set: Supported 00:09:52.577 Boot Partition: Not Supported 00:09:52.577 Memory Page Size Minimum: 4096 bytes 00:09:52.577 Memory Page Size Maximum: 65536 bytes 00:09:52.577 Persistent Memory Region: Not Supported 00:09:52.577 Optional Asynchronous Events Supported 00:09:52.577 Namespace Attribute Notices: Supported 00:09:52.577 Firmware Activation Notices: Not Supported 00:09:52.577 ANA Change 
Notices: Not Supported 00:09:52.577 PLE Aggregate Log Change Notices: Not Supported 00:09:52.577 LBA Status Info Alert Notices: Not Supported 00:09:52.577 EGE Aggregate Log Change Notices: Not Supported 00:09:52.577 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.577 Zone Descriptor Change Notices: Not Supported 00:09:52.577 Discovery Log Change Notices: Not Supported 00:09:52.577 Controller Attributes 00:09:52.577 128-bit Host Identifier: Not Supported 00:09:52.577 Non-Operational Permissive Mode: Not Supported 00:09:52.577 NVM Sets: Not Supported 00:09:52.577 Read Recovery Levels: Not Supported 00:09:52.577 Endurance Groups: Not Supported 00:09:52.577 Predictable Latency Mode: Not Supported 00:09:52.577 Traffic Based Keep Alive: Not Supported 00:09:52.577 Namespace Granularity: Not Supported 00:09:52.577 SQ Associations: Not Supported 00:09:52.577 UUID List: Not Supported 00:09:52.577 Multi-Domain Subsystem: Not Supported 00:09:52.577 Fixed Capacity Management: Not Supported 00:09:52.577 Variable Capacity Management: Not Supported 00:09:52.577 Delete Endurance Group: Not Supported 00:09:52.577 Delete NVM Set: Not Supported 00:09:52.577 Extended LBA Formats Supported: Supported 00:09:52.577 Flexible Data Placement Supported: Not Supported 00:09:52.577 00:09:52.577 Controller Memory Buffer Support 00:09:52.577 ================================ 00:09:52.577 Supported: No 00:09:52.577 00:09:52.577 Persistent Memory Region Support 00:09:52.577 ================================ 00:09:52.577 Supported: No 00:09:52.577 00:09:52.577 Admin Command Set Attributes 00:09:52.577 ============================ 00:09:52.577 Security Send/Receive: Not Supported 00:09:52.577 Format NVM: Supported 00:09:52.577 Firmware Activate/Download: Not Supported 00:09:52.577 Namespace Management: Supported 00:09:52.577 Device Self-Test: Not Supported 00:09:52.577 Directives: Supported 00:09:52.577 NVMe-MI: Not Supported 00:09:52.577 Virtualization Management: Not Supported 00:09:52.577 Doorbell Buffer Config: Supported 00:09:52.577 Get LBA Status Capability: Not Supported 00:09:52.577 Command & Feature Lockdown Capability: Not Supported 00:09:52.577 Abort Command Limit: 4 00:09:52.577 Async Event Request Limit: 4 00:09:52.577 Number of Firmware Slots: N/A 00:09:52.577 Firmware Slot 1 Read-Only: N/A 00:09:52.577 Firmware Activation Without Reset: N/A 00:09:52.577 Multiple Update Detection Support: N/A 00:09:52.577 Firmware Update Granularity: No Information Provided 00:09:52.577 Per-Namespace SMART Log: Yes 00:09:52.577 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.577 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:52.577 Command Effects Log Page: Supported 00:09:52.577 Get Log Page Extended Data: Supported 00:09:52.577 Telemetry Log Pages: Not Supported 00:09:52.577 Persistent Event Log Pages: Not Supported 00:09:52.577 Supported Log Pages Log Page: May Support 00:09:52.577 Commands Supported & Effects Log Page: Not Supported 00:09:52.577 Feature Identifiers & Effects Log Page: May Support 00:09:52.577 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.577 Data Area 4 for Telemetry Log: Not Supported 00:09:52.577 Error Log Page Entries Supported: 1 00:09:52.577 Keep Alive: Not Supported 00:09:52.577 00:09:52.577 NVM Command Set Attributes 00:09:52.577 ========================== 00:09:52.577 Submission Queue Entry Size 00:09:52.577 Max: 64 00:09:52.577 Min: 64 00:09:52.577 Completion Queue Entry Size 00:09:52.577 Max: 16 00:09:52.577 Min: 16 00:09:52.577 Number of Namespaces: 256 
00:09:52.577 Compare Command: Supported 00:09:52.577 Write Uncorrectable Command: Not Supported 00:09:52.577 Dataset Management Command: Supported 00:09:52.577 Write Zeroes Command: Supported 00:09:52.577 Set Features Save Field: Supported 00:09:52.577 Reservations: Not Supported 00:09:52.577 Timestamp: Supported 00:09:52.577 Copy: Supported 00:09:52.577 Volatile Write Cache: Present 00:09:52.577 Atomic Write Unit (Normal): 1 00:09:52.577 Atomic Write Unit (PFail): 1 00:09:52.577 Atomic Compare & Write Unit: 1 00:09:52.577 Fused Compare & Write: Not Supported 00:09:52.577 Scatter-Gather List 00:09:52.577 SGL Command Set: Supported 00:09:52.577 SGL Keyed: Not Supported 00:09:52.577 SGL Bit Bucket Descriptor: Not Supported 00:09:52.577 SGL Metadata Pointer: Not Supported 00:09:52.577 Oversized SGL: Not Supported 00:09:52.577 SGL Metadata Address: Not Supported 00:09:52.577 SGL Offset: Not Supported 00:09:52.577 Transport SGL Data Block: Not Supported 00:09:52.577 Replay Protected Memory Block: Not Supported 00:09:52.577 00:09:52.577 Firmware Slot Information 00:09:52.577 ========================= 00:09:52.577 Active slot: 1 00:09:52.577 Slot 1 Firmware Revision: 1.0 00:09:52.577 00:09:52.577 00:09:52.577 Commands Supported and Effects 00:09:52.577 ============================== 00:09:52.577 Admin Commands 00:09:52.577 -------------- 00:09:52.577 Delete I/O Submission Queue (00h): Supported 00:09:52.577 Create I/O Submission Queue (01h): Supported 00:09:52.577 Get Log Page (02h): Supported 00:09:52.577 Delete I/O Completion Queue (04h): Supported 00:09:52.577 Create I/O Completion Queue (05h): Supported 00:09:52.577 Identify (06h): Supported 00:09:52.577 Abort (08h): Supported 00:09:52.577 Set Features (09h): Supported 00:09:52.577 Get Features (0Ah): Supported 00:09:52.577 Asynchronous Event Request (0Ch): Supported 00:09:52.577 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.577 Directive Send (19h): Supported 00:09:52.577 Directive Receive (1Ah): Supported 00:09:52.577 Virtualization Management (1Ch): Supported 00:09:52.577 Doorbell Buffer Config (7Ch): Supported 00:09:52.577 Format NVM (80h): Supported LBA-Change 00:09:52.577 I/O Commands 00:09:52.577 ------------ 00:09:52.577 Flush (00h): Supported LBA-Change 00:09:52.577 Write (01h): Supported LBA-Change 00:09:52.577 Read (02h): Supported 00:09:52.577 Compare (05h): Supported 00:09:52.577 Write Zeroes (08h): Supported LBA-Change 00:09:52.577 Dataset Management (09h): Supported LBA-Change 00:09:52.577 Unknown (0Ch): Supported 00:09:52.577 Unknown (12h): Supported 00:09:52.578 Copy (19h): Supported LBA-Change 00:09:52.578 Unknown (1Dh): Supported LBA-Change 00:09:52.578 00:09:52.578 Error Log 00:09:52.578 ========= 00:09:52.578 00:09:52.578 Arbitration 00:09:52.578 =========== 00:09:52.578 Arbitration Burst: no limit 00:09:52.578 00:09:52.578 Power Management 00:09:52.578 ================ 00:09:52.578 Number of Power States: 1 00:09:52.578 Current Power State: Power State #0 00:09:52.578 Power State #0: 00:09:52.578 Max Power: 25.00 W 00:09:52.578 Non-Operational State: Operational 00:09:52.578 Entry Latency: 16 microseconds 00:09:52.578 Exit Latency: 4 microseconds 00:09:52.578 Relative Read Throughput: 0 00:09:52.578 Relative Read Latency: 0 00:09:52.578 Relative Write Throughput: 0 00:09:52.578 Relative Write Latency: 0 00:09:52.578 Idle Power: Not Reported 00:09:52.578 Active Power: Not Reported 00:09:52.578 Non-Operational Permissive Mode: Not Supported 00:09:52.578 00:09:52.578 Health Information 00:09:52.578 
================== 00:09:52.578 Critical Warnings: 00:09:52.578 Available Spare Space: OK 00:09:52.578 Temperature: OK 00:09:52.578 Device Reliability: OK 00:09:52.578 Read Only: No 00:09:52.578 Volatile Memory Backup: OK 00:09:52.578 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.578 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.578 Available Spare: 0% 00:09:52.578 Available Spare Threshold: 0% 00:09:52.578 Life Percentage Used: 0% 00:09:52.578 Data Units Read: 4271 00:09:52.578 Data Units Written: 1978 00:09:52.578 Host Read Commands: 205806 00:09:52.578 Host Write Commands: 101206 00:09:52.578 Controller Busy Time: 0 minutes 00:09:52.578 Power Cycles: 0 00:09:52.578 Power On Hours: 0 hours 00:09:52.578 Unsafe Shutdowns: 0 00:09:52.578 Unrecoverable Media Errors: 0 00:09:52.578 Lifetime Error Log Entries: 0 00:09:52.578 Warning Temperature Time: 0 minutes 00:09:52.578 Critical Temperature Time: 0 minutes 00:09:52.578 00:09:52.578 Number of Queues 00:09:52.578 ================ 00:09:52.578 Number of I/O Submission Queues: 64 00:09:52.578 Number of I/O Completion Queues: 64 00:09:52.578 00:09:52.578 ZNS Specific Controller Data 00:09:52.578 ============================ 00:09:52.578 Zone Append Size Limit: 0 00:09:52.578 00:09:52.578 00:09:52.578 Active Namespaces 00:09:52.578 ================= 00:09:52.578 Namespace ID:1 00:09:52.578 Error Recovery Timeout: Unlimited 00:09:52.578 Command Set Identifier: NVM (00h) 00:09:52.578 Deallocate: Supported 00:09:52.578 Deallocated/Unwritten Error: Supported 00:09:52.578 Deallocated Read Value: All 0x00 00:09:52.578 Deallocate in Write Zeroes: Not Supported 00:09:52.578 Deallocated Guard Field: 0xFFFF 00:09:52.578 Flush: Supported 00:09:52.578 Reservation: Not Supported 00:09:52.578 Namespace Sharing Capabilities: Private 00:09:52.578 Size (in LBAs): 1048576 (4GiB) 00:09:52.578 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.578 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.578 Thin Provisioning: Not Supported 00:09:52.578 Per-NS Atomic Units: No 00:09:52.578 Maximum Single Source Range Length: 128 00:09:52.578 Maximum Copy Length: 128 00:09:52.578 Maximum Source Range Count: 128 00:09:52.578 NGUID/EUI64 Never Reused: No 00:09:52.578 Namespace Write Protected: No 00:09:52.578 Number of LBA Formats: 8 00:09:52.578 Current LBA Format: LBA Format #04 00:09:52.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.578 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.578 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.578 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.578 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.578 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.578 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.578 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.578 00:09:52.578 Namespace ID:2 00:09:52.578 Error Recovery Timeout: Unlimited 00:09:52.578 Command Set Identifier: NVM (00h) 00:09:52.578 Deallocate: Supported 00:09:52.578 Deallocated/Unwritten Error: Supported 00:09:52.578 Deallocated Read Value: All 0x00 00:09:52.578 Deallocate in Write Zeroes: Not Supported 00:09:52.578 Deallocated Guard Field: 0xFFFF 00:09:52.578 Flush: Supported 00:09:52.578 Reservation: Not Supported 00:09:52.578 Namespace Sharing Capabilities: Private 00:09:52.578 Size (in LBAs): 1048576 (4GiB) 00:09:52.578 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.578 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.578 Thin Provisioning: Not Supported 00:09:52.578 Per-NS Atomic Units: 
No 00:09:52.578 Maximum Single Source Range Length: 128 00:09:52.578 Maximum Copy Length: 128 00:09:52.578 Maximum Source Range Count: 128 00:09:52.578 NGUID/EUI64 Never Reused: No 00:09:52.578 Namespace Write Protected: No 00:09:52.578 Number of LBA Formats: 8 00:09:52.578 Current LBA Format: LBA Format #04 00:09:52.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.578 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.578 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.578 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.578 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.578 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.578 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.578 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.578 00:09:52.578 Namespace ID:3 00:09:52.578 Error Recovery Timeout: Unlimited 00:09:52.578 Command Set Identifier: NVM (00h) 00:09:52.578 Deallocate: Supported 00:09:52.578 Deallocated/Unwritten Error: Supported 00:09:52.578 Deallocated Read Value: All 0x00 00:09:52.578 Deallocate in Write Zeroes: Not Supported 00:09:52.578 Deallocated Guard Field: 0xFFFF 00:09:52.578 Flush: Supported 00:09:52.578 Reservation: Not Supported 00:09:52.578 Namespace Sharing Capabilities: Private 00:09:52.578 Size (in LBAs): 1048576 (4GiB) 00:09:52.578 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.578 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.578 Thin Provisioning: Not Supported 00:09:52.578 Per-NS Atomic Units: No 00:09:52.578 Maximum Single Source Range Length: 128 00:09:52.578 Maximum Copy Length: 128 00:09:52.578 Maximum Source Range Count: 128 00:09:52.578 NGUID/EUI64 Never Reused: No 00:09:52.578 Namespace Write Protected: No 00:09:52.578 Number of LBA Formats: 8 00:09:52.578 Current LBA Format: LBA Format #04 00:09:52.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.578 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.578 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.578 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.578 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.578 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.578 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.579 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.579 00:09:52.579 07:25:01 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.579 07:25:01 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:52.838 ===================================================== 00:09:52.838 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:52.838 ===================================================== 00:09:52.838 Controller Capabilities/Features 00:09:52.838 ================================ 00:09:52.838 Vendor ID: 1b36 00:09:52.838 Subsystem Vendor ID: 1af4 00:09:52.838 Serial Number: 12343 00:09:52.838 Model Number: QEMU NVMe Ctrl 00:09:52.838 Firmware Version: 8.0.0 00:09:52.838 Recommended Arb Burst: 6 00:09:52.838 IEEE OUI Identifier: 00 54 52 00:09:52.838 Multi-path I/O 00:09:52.838 May have multiple subsystem ports: No 00:09:52.838 May have multiple controllers: Yes 00:09:52.838 Associated with SR-IOV VF: No 00:09:52.838 Max Data Transfer Size: 524288 00:09:52.838 Max Number of Namespaces: 256 00:09:52.838 Max Number of I/O Queues: 64 00:09:52.838 NVMe Specification Version (VS): 1.4 00:09:52.838 NVMe Specification Version (Identify): 1.4 00:09:52.839 Maximum Queue Entries: 2048 
00:09:52.839 Contiguous Queues Required: Yes 00:09:52.839 Arbitration Mechanisms Supported 00:09:52.839 Weighted Round Robin: Not Supported 00:09:52.839 Vendor Specific: Not Supported 00:09:52.839 Reset Timeout: 7500 ms 00:09:52.839 Doorbell Stride: 4 bytes 00:09:52.839 NVM Subsystem Reset: Not Supported 00:09:52.839 Command Sets Supported 00:09:52.839 NVM Command Set: Supported 00:09:52.839 Boot Partition: Not Supported 00:09:52.839 Memory Page Size Minimum: 4096 bytes 00:09:52.839 Memory Page Size Maximum: 65536 bytes 00:09:52.839 Persistent Memory Region: Not Supported 00:09:52.839 Optional Asynchronous Events Supported 00:09:52.839 Namespace Attribute Notices: Supported 00:09:52.839 Firmware Activation Notices: Not Supported 00:09:52.839 ANA Change Notices: Not Supported 00:09:52.839 PLE Aggregate Log Change Notices: Not Supported 00:09:52.839 LBA Status Info Alert Notices: Not Supported 00:09:52.839 EGE Aggregate Log Change Notices: Not Supported 00:09:52.839 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.839 Zone Descriptor Change Notices: Not Supported 00:09:52.839 Discovery Log Change Notices: Not Supported 00:09:52.839 Controller Attributes 00:09:52.839 128-bit Host Identifier: Not Supported 00:09:52.839 Non-Operational Permissive Mode: Not Supported 00:09:52.839 NVM Sets: Not Supported 00:09:52.839 Read Recovery Levels: Not Supported 00:09:52.839 Endurance Groups: Supported 00:09:52.839 Predictable Latency Mode: Not Supported 00:09:52.839 Traffic Based Keep Alive: Not Supported 00:09:52.839 Namespace Granularity: Not Supported 00:09:52.839 SQ Associations: Not Supported 00:09:52.839 UUID List: Not Supported 00:09:52.839 Multi-Domain Subsystem: Not Supported 00:09:52.839 Fixed Capacity Management: Not Supported 00:09:52.839 Variable Capacity Management: Not Supported 00:09:52.839 Delete Endurance Group: Not Supported 00:09:52.839 Delete NVM Set: Not Supported 00:09:52.839 Extended LBA Formats Supported: Supported 00:09:52.839 Flexible Data Placement Supported: Supported 00:09:52.839 00:09:52.839 Controller Memory Buffer Support 00:09:52.839 ================================ 00:09:52.839 Supported: No 00:09:52.839 00:09:52.839 Persistent Memory Region Support 00:09:52.839 ================================ 00:09:52.839 Supported: No 00:09:52.839 00:09:52.839 Admin Command Set Attributes 00:09:52.839 ============================ 00:09:52.839 Security Send/Receive: Not Supported 00:09:52.839 Format NVM: Supported 00:09:52.839 Firmware Activate/Download: Not Supported 00:09:52.839 Namespace Management: Supported 00:09:52.839 Device Self-Test: Not Supported 00:09:52.839 Directives: Supported 00:09:52.839 NVMe-MI: Not Supported 00:09:52.839 Virtualization Management: Not Supported 00:09:52.839 Doorbell Buffer Config: Supported 00:09:52.839 Get LBA Status Capability: Not Supported 00:09:52.839 Command & Feature Lockdown Capability: Not Supported 00:09:52.839 Abort Command Limit: 4 00:09:52.839 Async Event Request Limit: 4 00:09:52.839 Number of Firmware Slots: N/A 00:09:52.839 Firmware Slot 1 Read-Only: N/A 00:09:52.839 Firmware Activation Without Reset: N/A 00:09:52.839 Multiple Update Detection Support: N/A 00:09:52.839 Firmware Update Granularity: No Information Provided 00:09:52.839 Per-Namespace SMART Log: Yes 00:09:52.839 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.839 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:52.839 Command Effects Log Page: Supported 00:09:52.839 Get Log Page Extended Data: Supported 00:09:52.839 Telemetry Log Pages: Not Supported
00:09:52.839 Persistent Event Log Pages: Not Supported 00:09:52.839 Supported Log Pages Log Page: May Support 00:09:52.839 Commands Supported & Effects Log Page: Not Supported 00:09:52.839 Feature Identifiers & Effects Log Page: May Support 00:09:52.839 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.839 Data Area 4 for Telemetry Log: Not Supported 00:09:52.839 Error Log Page Entries Supported: 1 00:09:52.839 Keep Alive: Not Supported 00:09:52.839 00:09:52.839 NVM Command Set Attributes 00:09:52.839 ========================== 00:09:52.839 Submission Queue Entry Size 00:09:52.839 Max: 64 00:09:52.839 Min: 64 00:09:52.839 Completion Queue Entry Size 00:09:52.839 Max: 16 00:09:52.839 Min: 16 00:09:52.839 Number of Namespaces: 256 00:09:52.839 Compare Command: Supported 00:09:52.839 Write Uncorrectable Command: Not Supported 00:09:52.839 Dataset Management Command: Supported 00:09:52.839 Write Zeroes Command: Supported 00:09:52.839 Set Features Save Field: Supported 00:09:52.839 Reservations: Not Supported 00:09:52.839 Timestamp: Supported 00:09:52.839 Copy: Supported 00:09:52.839 Volatile Write Cache: Present 00:09:52.839 Atomic Write Unit (Normal): 1 00:09:52.839 Atomic Write Unit (PFail): 1 00:09:52.839 Atomic Compare & Write Unit: 1 00:09:52.839 Fused Compare & Write: Not Supported 00:09:52.839 Scatter-Gather List 00:09:52.839 SGL Command Set: Supported 00:09:52.839 SGL Keyed: Not Supported 00:09:52.839 SGL Bit Bucket Descriptor: Not Supported 00:09:52.839 SGL Metadata Pointer: Not Supported 00:09:52.839 Oversized SGL: Not Supported 00:09:52.839 SGL Metadata Address: Not Supported 00:09:52.839 SGL Offset: Not Supported 00:09:52.839 Transport SGL Data Block: Not Supported 00:09:52.839 Replay Protected Memory Block: Not Supported 00:09:52.839 00:09:52.839 Firmware Slot Information 00:09:52.839 ========================= 00:09:52.839 Active slot: 1 00:09:52.839 Slot 1 Firmware Revision: 1.0 00:09:52.839 00:09:52.839 00:09:52.839 Commands Supported and Effects 00:09:52.839 ============================== 00:09:52.839 Admin Commands 00:09:52.839 -------------- 00:09:52.839 Delete I/O Submission Queue (00h): Supported 00:09:52.839 Create I/O Submission Queue (01h): Supported 00:09:52.839 Get Log Page (02h): Supported 00:09:52.839 Delete I/O Completion Queue (04h): Supported 00:09:52.839 Create I/O Completion Queue (05h): Supported 00:09:52.839 Identify (06h): Supported 00:09:52.839 Abort (08h): Supported 00:09:52.839 Set Features (09h): Supported 00:09:52.839 Get Features (0Ah): Supported 00:09:52.839 Asynchronous Event Request (0Ch): Supported 00:09:52.839 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.839 Directive Send (19h): Supported 00:09:52.839 Directive Receive (1Ah): Supported 00:09:52.839 Virtualization Management (1Ch): Supported 00:09:52.839 Doorbell Buffer Config (7Ch): Supported 00:09:52.839 Format NVM (80h): Supported LBA-Change 00:09:52.839 I/O Commands 00:09:52.839 ------------ 00:09:52.839 Flush (00h): Supported LBA-Change 00:09:52.839 Write (01h): Supported LBA-Change 00:09:52.839 Read (02h): Supported 00:09:52.839 Compare (05h): Supported 00:09:52.839 Write Zeroes (08h): Supported LBA-Change 00:09:52.839 Dataset Management (09h): Supported LBA-Change 00:09:52.839 Unknown (0Ch): Supported 00:09:52.840 Unknown (12h): Supported 00:09:52.840 Copy (19h): Supported LBA-Change 00:09:52.840 Unknown (1Dh): Supported LBA-Change 00:09:52.840 00:09:52.840 Error Log 00:09:52.840 ========= 00:09:52.840 00:09:52.840 Arbitration 00:09:52.840 ===========
00:09:52.840 Arbitration Burst: no limit 00:09:52.840 00:09:52.840 Power Management 00:09:52.840 ================ 00:09:52.840 Number of Power States: 1 00:09:52.840 Current Power State: Power State #0 00:09:52.840 Power State #0: 00:09:52.840 Max Power: 25.00 W 00:09:52.840 Non-Operational State: Operational 00:09:52.840 Entry Latency: 16 microseconds 00:09:52.840 Exit Latency: 4 microseconds 00:09:52.840 Relative Read Throughput: 0 00:09:52.840 Relative Read Latency: 0 00:09:52.840 Relative Write Throughput: 0 00:09:52.840 Relative Write Latency: 0 00:09:52.840 Idle Power: Not Reported 00:09:52.840 Active Power: Not Reported 00:09:52.840 Non-Operational Permissive Mode: Not Supported 00:09:52.840 00:09:52.840 Health Information 00:09:52.840 ================== 00:09:52.840 Critical Warnings: 00:09:52.840 Available Spare Space: OK 00:09:52.840 Temperature: OK 00:09:52.840 Device Reliability: OK 00:09:52.840 Read Only: No 00:09:52.840 Volatile Memory Backup: OK 00:09:52.840 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.840 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.840 Available Spare: 0% 00:09:52.840 Available Spare Threshold: 0% 00:09:52.840 Life Percentage Used: 0% 00:09:52.840 Data Units Read: 1589 00:09:52.840 Data Units Written: 743 00:09:52.840 Host Read Commands: 69819 00:09:52.840 Host Write Commands: 34411 00:09:52.840 Controller Busy Time: 0 minutes 00:09:52.840 Power Cycles: 0 00:09:52.840 Power On Hours: 0 hours 00:09:52.840 Unsafe Shutdowns: 0 00:09:52.840 Unrecoverable Media Errors: 0 00:09:52.840 Lifetime Error Log Entries: 0 00:09:52.840 Warning Temperature Time: 0 minutes 00:09:52.840 Critical Temperature Time: 0 minutes 00:09:52.840 00:09:52.840 Number of Queues 00:09:52.840 ================ 00:09:52.840 Number of I/O Submission Queues: 64 00:09:52.840 Number of I/O Completion Queues: 64 00:09:52.840 00:09:52.840 ZNS Specific Controller Data 00:09:52.840 ============================ 00:09:52.840 Zone Append Size Limit: 0 00:09:52.840 00:09:52.840 00:09:52.840 Active Namespaces 00:09:52.840 ================= 00:09:52.840 Namespace ID:1 00:09:52.840 Error Recovery Timeout: Unlimited 00:09:52.840 Command Set Identifier: NVM (00h) 00:09:52.840 Deallocate: Supported 00:09:52.840 Deallocated/Unwritten Error: Supported 00:09:52.840 Deallocated Read Value: All 0x00 00:09:52.840 Deallocate in Write Zeroes: Not Supported 00:09:52.840 Deallocated Guard Field: 0xFFFF 00:09:52.840 Flush: Supported 00:09:52.840 Reservation: Not Supported 00:09:52.840 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.840 Size (in LBAs): 262144 (1GiB) 00:09:52.840 Capacity (in LBAs): 262144 (1GiB) 00:09:52.840 Utilization (in LBAs): 262144 (1GiB) 00:09:52.840 Thin Provisioning: Not Supported 00:09:52.840 Per-NS Atomic Units: No 00:09:52.840 Maximum Single Source Range Length: 128 00:09:52.840 Maximum Copy Length: 128 00:09:52.840 Maximum Source Range Count: 128 00:09:52.840 NGUID/EUI64 Never Reused: No 00:09:52.840 Namespace Write Protected: No 00:09:52.840 Endurance group ID: 1 00:09:52.840 Number of LBA Formats: 8 00:09:52.840 Current LBA Format: LBA Format #04 00:09:52.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.840 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.840 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.840 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.840 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.840 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.840 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:52.840 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.840 00:09:52.840 Get Feature FDP: 00:09:52.840 ================ 00:09:52.840 Enabled: Yes 00:09:52.840 FDP configuration index: 0 00:09:52.840 00:09:52.840 FDP configurations log page 00:09:52.840 =========================== 00:09:52.840 Number of FDP configurations: 1 00:09:52.840 Version: 0 00:09:52.840 Size: 112 00:09:52.840 FDP Configuration Descriptor: 0 00:09:52.840 Descriptor Size: 96 00:09:52.840 Reclaim Group Identifier format: 2 00:09:52.840 FDP Volatile Write Cache: Not Present 00:09:52.840 FDP Configuration: Valid 00:09:52.840 Vendor Specific Size: 0 00:09:52.840 Number of Reclaim Groups: 2 00:09:52.840 Number of Reclaim Unit Handles: 8 00:09:52.840 Max Placement Identifiers: 128 00:09:52.840 Number of Namespaces Supported: 256 00:09:52.840 Reclaim Unit Nominal Size: 6000000 bytes 00:09:52.840 Estimated Reclaim Unit Time Limit: Not Reported 00:09:52.840 RUH Desc #000: RUH Type: Initially Isolated 00:09:52.840 RUH Desc #001: RUH Type: Initially Isolated 00:09:52.840 RUH Desc #002: RUH Type: Initially Isolated 00:09:52.840 RUH Desc #003: RUH Type: Initially Isolated 00:09:52.840 RUH Desc #004: RUH Type: Initially Isolated 00:09:52.840 RUH Desc #005: RUH Type: Initially Isolated 00:09:52.840 RUH Desc #006: RUH Type: Initially Isolated 00:09:52.840 RUH Desc #007: RUH Type: Initially Isolated 00:09:52.840 00:09:52.840 FDP reclaim unit handle usage log page 00:09:52.840 ====================================== 00:09:52.840 Number of Reclaim Unit Handles: 8 00:09:52.840 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:52.840 RUH Usage Desc #001: RUH Attributes: Unused 00:09:52.840 RUH Usage Desc #002: RUH Attributes: Unused 00:09:52.840 RUH Usage Desc #003: RUH Attributes: Unused 00:09:52.840 RUH Usage Desc #004: RUH Attributes: Unused 00:09:52.840 RUH Usage Desc #005: RUH Attributes: Unused 00:09:52.840 RUH Usage Desc #006: RUH Attributes: Unused 00:09:52.840 RUH Usage Desc #007: RUH Attributes: Unused 00:09:52.840 00:09:52.840 FDP statistics log page 00:09:52.840 ======================= 00:09:52.840 Host bytes with metadata written: 487010304 00:09:52.840 Media bytes with metadata written: 487198720 00:09:52.840 Media bytes erased: 0 00:09:52.840 00:09:52.840 FDP events log page 00:09:52.840 =================== 00:09:52.840 Number of FDP events: 0 00:09:52.840 00:09:52.840 00:09:52.840 real 0m1.093s 00:09:52.840 user 0m0.386s 00:09:52.840 sys 0m0.482s 00:09:52.840 07:25:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.840 07:25:02 -- common/autotest_common.sh@10 -- # set +x 00:09:52.840 ************************************ 00:09:52.840 END TEST nvme_identify 00:09:52.840 ************************************ 00:09:52.840 07:25:02 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:52.840 07:25:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.840 07:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.840 07:25:02 -- common/autotest_common.sh@10 -- # set +x 00:09:52.840 ************************************ 00:09:52.840 START TEST nvme_perf 00:09:52.840 ************************************ 00:09:52.841 07:25:02 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:52.841 07:25:02 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:54.223 Initializing NVMe Controllers 00:09:54.223 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:54.223
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:54.223 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:54.223 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:54.223 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:54.223 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:54.223 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:54.223 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:54.223 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:54.223 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:54.223 Initialization complete. Launching workers. 00:09:54.223 ======================================================== 00:09:54.223 Latency(us) 00:09:54.223 Device Information : IOPS MiB/s Average min max 00:09:54.223 PCIE (0000:00:06.0) NSID 1 from core 0: 18276.02 214.17 7000.62 5057.99 23984.15 00:09:54.223 PCIE (0000:00:07.0) NSID 1 from core 0: 18276.02 214.17 6996.39 5203.85 23232.69 00:09:54.223 PCIE (0000:00:09.0) NSID 1 from core 0: 18276.02 214.17 6990.73 5147.30 22928.50 00:09:54.223 PCIE (0000:00:08.0) NSID 1 from core 0: 18276.02 214.17 6984.95 5211.39 22068.80 00:09:54.223 PCIE (0000:00:08.0) NSID 2 from core 0: 18276.02 214.17 6979.15 5214.64 21216.63 00:09:54.223 PCIE (0000:00:08.0) NSID 3 from core 0: 18276.02 214.17 6973.51 5226.78 20303.19 00:09:54.223 ======================================================== 00:09:54.223 Total : 109656.12 1285.03 6987.56 5057.99 23984.15 00:09:54.223 00:09:54.223 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:54.223 ================================================================================= 00:09:54.223 1.00000% : 5217.674us 00:09:54.223 10.00000% : 5545.354us 00:09:54.223 25.00000% : 5948.652us 00:09:54.223 50.00000% : 6604.012us 00:09:54.223 75.00000% : 7208.960us 00:09:54.223 90.00000% : 9175.040us 00:09:54.223 95.00000% : 10384.935us 00:09:54.223 98.00000% : 11846.892us 00:09:54.223 99.00000% : 15123.692us 00:09:54.223 99.50000% : 21475.643us 00:09:54.223 99.90000% : 23492.135us 00:09:54.223 99.99000% : 23996.258us 00:09:54.223 99.99900% : 23996.258us 00:09:54.223 99.99990% : 23996.258us 00:09:54.224 99.99999% : 23996.258us 00:09:54.224 00:09:54.224 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:54.224 ================================================================================= 00:09:54.224 1.00000% : 5343.705us 00:09:54.224 10.00000% : 5671.385us 00:09:54.224 25.00000% : 5999.065us 00:09:54.224 50.00000% : 6553.600us 00:09:54.224 75.00000% : 7108.135us 00:09:54.224 90.00000% : 9175.040us 00:09:54.224 95.00000% : 10334.523us 00:09:54.224 98.00000% : 11695.655us 00:09:54.224 99.00000% : 14922.043us 00:09:54.224 99.50000% : 20769.871us 00:09:54.224 99.90000% : 22786.363us 00:09:54.224 99.99000% : 23290.486us 00:09:54.224 99.99900% : 23290.486us 00:09:54.224 99.99990% : 23290.486us 00:09:54.224 99.99999% : 23290.486us 00:09:54.224 00:09:54.224 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:54.224 ================================================================================= 00:09:54.224 1.00000% : 5343.705us 00:09:54.224 10.00000% : 5671.385us 00:09:54.224 25.00000% : 6024.271us 00:09:54.224 50.00000% : 6553.600us 00:09:54.224 75.00000% : 7108.135us 00:09:54.224 90.00000% : 9124.628us 00:09:54.224 95.00000% : 10384.935us 00:09:54.224 98.00000% : 11897.305us 00:09:54.224 99.00000% : 14922.043us 00:09:54.224 99.50000% : 20366.572us 00:09:54.224 99.90000% : 22483.889us 
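The summary table above is internally consistent with the spdk_nvme_perf invocation recorded before it: the MiB/s column is IOPS times the 12288-byte I/O size from -o 12288, and with the queue depth of 128 from -q 128 the average latency follows from Little's law. A quick check with bc, using the numbers from the 0000:00:06.0 row:

# MiB/s = IOPS * io_size / 2^20; io_size 12288 comes from -o 12288
echo 'scale=2; 18276.02 * 12288 / 1048576' | bc
# -> 214.17, matching the MiB/s column

# Little's law: avg latency = queue_depth / IOPS, with -q 128
echo 'scale=2; 128 * 1000000 / 18276.02' | bc
# -> 7003.71 us, in line with the reported 7000.62 us average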
00:09:54.224 99.99000% : 22988.012us 00:09:54.224 99.99900% : 22988.012us 00:09:54.224 99.99990% : 22988.012us 00:09:54.224 99.99999% : 22988.012us 00:09:54.224 00:09:54.224 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:54.224 ================================================================================= 00:09:54.224 1.00000% : 5368.911us 00:09:54.224 10.00000% : 5671.385us 00:09:54.224 25.00000% : 6024.271us 00:09:54.224 50.00000% : 6553.600us 00:09:54.224 75.00000% : 7108.135us 00:09:54.224 90.00000% : 8822.154us 00:09:54.224 95.00000% : 10687.409us 00:09:54.224 98.00000% : 12351.015us 00:09:54.224 99.00000% : 14922.043us 00:09:54.224 99.50000% : 19559.975us 00:09:54.224 99.90000% : 21576.468us 00:09:54.224 99.99000% : 22080.591us 00:09:54.224 99.99900% : 22080.591us 00:09:54.224 99.99990% : 22080.591us 00:09:54.224 99.99999% : 22080.591us 00:09:54.224 00:09:54.224 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:54.224 ================================================================================= 00:09:54.224 1.00000% : 5368.911us 00:09:54.224 10.00000% : 5671.385us 00:09:54.224 25.00000% : 6024.271us 00:09:54.224 50.00000% : 6553.600us 00:09:54.224 75.00000% : 7108.135us 00:09:54.224 90.00000% : 8771.742us 00:09:54.224 95.00000% : 10435.348us 00:09:54.224 98.00000% : 12703.902us 00:09:54.224 99.00000% : 13913.797us 00:09:54.224 99.50000% : 18652.554us 00:09:54.224 99.90000% : 20769.871us 00:09:54.224 99.99000% : 21273.994us 00:09:54.224 99.99900% : 21273.994us 00:09:54.224 99.99990% : 21273.994us 00:09:54.224 99.99999% : 21273.994us 00:09:54.224 00:09:54.224 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:54.224 ================================================================================= 00:09:54.224 1.00000% : 5368.911us 00:09:54.224 10.00000% : 5671.385us 00:09:54.224 25.00000% : 6024.271us 00:09:54.224 50.00000% : 6553.600us 00:09:54.224 75.00000% : 7108.135us 00:09:54.224 90.00000% : 8822.154us 00:09:54.224 95.00000% : 10485.760us 00:09:54.224 98.00000% : 12552.665us 00:09:54.224 99.00000% : 14317.095us 00:09:54.224 99.50000% : 17845.957us 00:09:54.224 99.90000% : 19862.449us 00:09:54.224 99.99000% : 20366.572us 00:09:54.224 99.99900% : 20366.572us 00:09:54.224 99.99990% : 20366.572us 00:09:54.224 99.99999% : 20366.572us 00:09:54.224 00:09:54.224 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:54.224 ============================================================================== 00:09:54.224 Range in us Cumulative IO count 00:09:54.224 5041.231 - 5066.437: 0.0109% ( 2) 00:09:54.224 5066.437 - 5091.643: 0.0819% ( 13) 00:09:54.224 5091.643 - 5116.849: 0.2021% ( 22) 00:09:54.224 5116.849 - 5142.055: 0.3387% ( 25) 00:09:54.224 5142.055 - 5167.262: 0.6228% ( 52) 00:09:54.224 5167.262 - 5192.468: 0.9342% ( 57) 00:09:54.224 5192.468 - 5217.674: 1.2019% ( 49) 00:09:54.224 5217.674 - 5242.880: 1.5406% ( 62) 00:09:54.224 5242.880 - 5268.086: 2.0378% ( 91) 00:09:54.224 5268.086 - 5293.292: 2.4967% ( 84) 00:09:54.224 5293.292 - 5318.498: 3.0267% ( 97) 00:09:54.224 5318.498 - 5343.705: 3.6167% ( 108) 00:09:54.224 5343.705 - 5368.911: 4.3597% ( 136) 00:09:54.224 5368.911 - 5394.117: 5.0699% ( 130) 00:09:54.224 5394.117 - 5419.323: 5.8730% ( 147) 00:09:54.224 5419.323 - 5444.529: 6.6597% ( 144) 00:09:54.224 5444.529 - 5469.735: 7.5557% ( 164) 00:09:54.224 5469.735 - 5494.942: 8.4299% ( 160) 00:09:54.224 5494.942 - 5520.148: 9.3094% ( 161) 00:09:54.224 5520.148 - 5545.354: 10.1453% ( 153) 
00:09:54.224 5545.354 - 5570.560: 11.0632% ( 168) 00:09:54.224 5570.560 - 5595.766: 11.9865% ( 169) 00:09:54.224 5595.766 - 5620.972: 12.9371% ( 174) 00:09:54.224 5620.972 - 5646.178: 13.8221% ( 162) 00:09:54.224 5646.178 - 5671.385: 14.7837% ( 176) 00:09:54.224 5671.385 - 5696.591: 15.7069% ( 169) 00:09:54.224 5696.591 - 5721.797: 16.6685% ( 176) 00:09:54.224 5721.797 - 5747.003: 17.5535% ( 162) 00:09:54.224 5747.003 - 5772.209: 18.5424% ( 181) 00:09:54.224 5772.209 - 5797.415: 19.4712% ( 170) 00:09:54.224 5797.415 - 5822.622: 20.4600% ( 181) 00:09:54.224 5822.622 - 5847.828: 21.4926% ( 189) 00:09:54.224 5847.828 - 5873.034: 22.3394% ( 155) 00:09:54.224 5873.034 - 5898.240: 23.4156% ( 197) 00:09:54.224 5898.240 - 5923.446: 24.3499% ( 171) 00:09:54.224 5923.446 - 5948.652: 25.3442% ( 182) 00:09:54.224 5948.652 - 5973.858: 26.3385% ( 182) 00:09:54.224 5973.858 - 5999.065: 27.3547% ( 186) 00:09:54.224 5999.065 - 6024.271: 28.3271% ( 178) 00:09:54.224 6024.271 - 6049.477: 29.4034% ( 197) 00:09:54.224 6049.477 - 6074.683: 30.3049% ( 165) 00:09:54.224 6074.683 - 6099.889: 31.3866% ( 198) 00:09:54.224 6099.889 - 6125.095: 32.3536% ( 177) 00:09:54.224 6125.095 - 6150.302: 33.3971% ( 191) 00:09:54.224 6150.302 - 6175.508: 34.3750% ( 179) 00:09:54.224 6175.508 - 6200.714: 35.4567% ( 198) 00:09:54.224 6200.714 - 6225.920: 36.4237% ( 177) 00:09:54.224 6225.920 - 6251.126: 37.5109% ( 199) 00:09:54.224 6251.126 - 6276.332: 38.5107% ( 183) 00:09:54.224 6276.332 - 6301.538: 39.5542% ( 191) 00:09:54.224 6301.538 - 6326.745: 40.5431% ( 181) 00:09:54.224 6326.745 - 6351.951: 41.6248% ( 198) 00:09:54.224 6351.951 - 6377.157: 42.5863% ( 176) 00:09:54.224 6377.157 - 6402.363: 43.6571% ( 196) 00:09:54.224 6402.363 - 6427.569: 44.6842% ( 188) 00:09:54.224 6427.569 - 6452.775: 45.6512% ( 177) 00:09:54.224 6452.775 - 6503.188: 47.7819% ( 390) 00:09:54.224 6503.188 - 6553.600: 49.9344% ( 394) 00:09:54.224 6553.600 - 6604.012: 51.9449% ( 368) 00:09:54.224 6604.012 - 6654.425: 53.9937% ( 375) 00:09:54.224 6654.425 - 6704.837: 56.0642% ( 379) 00:09:54.224 6704.837 - 6755.249: 58.1676% ( 385) 00:09:54.224 6755.249 - 6805.662: 60.2491% ( 381) 00:09:54.224 6805.662 - 6856.074: 62.3416% ( 383) 00:09:54.224 6856.074 - 6906.486: 64.4832% ( 392) 00:09:54.224 6906.486 - 6956.898: 66.5428% ( 377) 00:09:54.224 6956.898 - 7007.311: 68.6735% ( 390) 00:09:54.224 7007.311 - 7057.723: 70.6731% ( 366) 00:09:54.225 7057.723 - 7108.135: 72.6016% ( 353) 00:09:54.225 7108.135 - 7158.548: 74.4974% ( 347) 00:09:54.225 7158.548 - 7208.960: 76.2074% ( 313) 00:09:54.225 7208.960 - 7259.372: 77.6060% ( 256) 00:09:54.225 7259.372 - 7309.785: 78.8462% ( 227) 00:09:54.225 7309.785 - 7360.197: 79.7694% ( 169) 00:09:54.225 7360.197 - 7410.609: 80.4250% ( 120) 00:09:54.225 7410.609 - 7461.022: 80.9604% ( 98) 00:09:54.225 7461.022 - 7511.434: 81.4412% ( 88) 00:09:54.225 7511.434 - 7561.846: 81.8127% ( 68) 00:09:54.225 7561.846 - 7612.258: 82.2225% ( 75) 00:09:54.225 7612.258 - 7662.671: 82.5721% ( 64) 00:09:54.225 7662.671 - 7713.083: 82.9600% ( 71) 00:09:54.225 7713.083 - 7763.495: 83.3479% ( 71) 00:09:54.225 7763.495 - 7813.908: 83.6866% ( 62) 00:09:54.225 7813.908 - 7864.320: 84.0636% ( 69) 00:09:54.225 7864.320 - 7914.732: 84.4187% ( 65) 00:09:54.225 7914.732 - 7965.145: 84.7137% ( 54) 00:09:54.225 7965.145 - 8015.557: 84.9978% ( 52) 00:09:54.225 8015.557 - 8065.969: 85.2601% ( 48) 00:09:54.225 8065.969 - 8116.382: 85.4895% ( 42) 00:09:54.225 8116.382 - 8166.794: 85.6917% ( 37) 00:09:54.225 8166.794 - 8217.206: 85.8938% ( 37) 00:09:54.225 
8217.206 - 8267.618: 86.0632% ( 31) 00:09:54.225 8267.618 - 8318.031: 86.2544% ( 35) 00:09:54.225 8318.031 - 8368.443: 86.4674% ( 39) 00:09:54.225 8368.443 - 8418.855: 86.6696% ( 37) 00:09:54.225 8418.855 - 8469.268: 86.8990% ( 42) 00:09:54.225 8469.268 - 8519.680: 87.1558% ( 47) 00:09:54.225 8519.680 - 8570.092: 87.3907% ( 43) 00:09:54.225 8570.092 - 8620.505: 87.6475% ( 47) 00:09:54.225 8620.505 - 8670.917: 87.8770% ( 42) 00:09:54.225 8670.917 - 8721.329: 88.1174% ( 44) 00:09:54.225 8721.329 - 8771.742: 88.3031% ( 34) 00:09:54.225 8771.742 - 8822.154: 88.5544% ( 46) 00:09:54.225 8822.154 - 8872.566: 88.7456% ( 35) 00:09:54.225 8872.566 - 8922.978: 88.9806% ( 43) 00:09:54.225 8922.978 - 8973.391: 89.2100% ( 42) 00:09:54.225 8973.391 - 9023.803: 89.4176% ( 38) 00:09:54.225 9023.803 - 9074.215: 89.6525% ( 43) 00:09:54.225 9074.215 - 9124.628: 89.8765% ( 41) 00:09:54.225 9124.628 - 9175.040: 90.0841% ( 38) 00:09:54.225 9175.040 - 9225.452: 90.2644% ( 33) 00:09:54.225 9225.452 - 9275.865: 90.4720% ( 38) 00:09:54.225 9275.865 - 9326.277: 90.6960% ( 41) 00:09:54.225 9326.277 - 9376.689: 90.8982% ( 37) 00:09:54.225 9376.689 - 9427.102: 91.1440% ( 45) 00:09:54.225 9427.102 - 9477.514: 91.3516% ( 38) 00:09:54.225 9477.514 - 9527.926: 91.5592% ( 38) 00:09:54.225 9527.926 - 9578.338: 91.7887% ( 42) 00:09:54.225 9578.338 - 9628.751: 92.0236% ( 43) 00:09:54.225 9628.751 - 9679.163: 92.2203% ( 36) 00:09:54.225 9679.163 - 9729.575: 92.4443% ( 41) 00:09:54.225 9729.575 - 9779.988: 92.6246% ( 33) 00:09:54.225 9779.988 - 9830.400: 92.8158% ( 35) 00:09:54.225 9830.400 - 9880.812: 93.0398% ( 41) 00:09:54.225 9880.812 - 9931.225: 93.2365% ( 36) 00:09:54.225 9931.225 - 9981.637: 93.4386% ( 37) 00:09:54.225 9981.637 - 10032.049: 93.6243% ( 34) 00:09:54.225 10032.049 - 10082.462: 93.8156% ( 35) 00:09:54.225 10082.462 - 10132.874: 94.0068% ( 35) 00:09:54.225 10132.874 - 10183.286: 94.2253% ( 40) 00:09:54.225 10183.286 - 10233.698: 94.4274% ( 37) 00:09:54.225 10233.698 - 10284.111: 94.6296% ( 37) 00:09:54.225 10284.111 - 10334.523: 94.8317% ( 37) 00:09:54.225 10334.523 - 10384.935: 95.0393% ( 38) 00:09:54.225 10384.935 - 10435.348: 95.2415% ( 37) 00:09:54.225 10435.348 - 10485.760: 95.3999% ( 29) 00:09:54.225 10485.760 - 10536.172: 95.5638% ( 30) 00:09:54.225 10536.172 - 10586.585: 95.6895% ( 23) 00:09:54.225 10586.585 - 10636.997: 95.8260% ( 25) 00:09:54.225 10636.997 - 10687.409: 95.9353% ( 20) 00:09:54.225 10687.409 - 10737.822: 96.0500% ( 21) 00:09:54.225 10737.822 - 10788.234: 96.1593% ( 20) 00:09:54.225 10788.234 - 10838.646: 96.2795% ( 22) 00:09:54.225 10838.646 - 10889.058: 96.3997% ( 22) 00:09:54.225 10889.058 - 10939.471: 96.5035% ( 19) 00:09:54.225 10939.471 - 10989.883: 96.6292% ( 23) 00:09:54.225 10989.883 - 11040.295: 96.7439% ( 21) 00:09:54.225 11040.295 - 11090.708: 96.8422% ( 18) 00:09:54.225 11090.708 - 11141.120: 96.9460% ( 19) 00:09:54.225 11141.120 - 11191.532: 97.0662% ( 22) 00:09:54.225 11191.532 - 11241.945: 97.1646% ( 18) 00:09:54.225 11241.945 - 11292.357: 97.2738% ( 20) 00:09:54.225 11292.357 - 11342.769: 97.3667% ( 17) 00:09:54.225 11342.769 - 11393.182: 97.4486% ( 15) 00:09:54.225 11393.182 - 11443.594: 97.5743% ( 23) 00:09:54.225 11443.594 - 11494.006: 97.6399% ( 12) 00:09:54.225 11494.006 - 11544.418: 97.6890% ( 9) 00:09:54.225 11544.418 - 11594.831: 97.7546% ( 12) 00:09:54.225 11594.831 - 11645.243: 97.8147% ( 11) 00:09:54.225 11645.243 - 11695.655: 97.8802% ( 12) 00:09:54.225 11695.655 - 11746.068: 97.9294% ( 9) 00:09:54.225 11746.068 - 11796.480: 97.9786% ( 9) 00:09:54.225 
11796.480 - 11846.892: 98.0168% ( 7) 00:09:54.225 11846.892 - 11897.305: 98.0605% ( 8) 00:09:54.225 11897.305 - 11947.717: 98.0933% ( 6) 00:09:54.225 11947.717 - 11998.129: 98.1479% ( 10) 00:09:54.225 11998.129 - 12048.542: 98.1917% ( 8) 00:09:54.225 12048.542 - 12098.954: 98.2408% ( 9) 00:09:54.225 12098.954 - 12149.366: 98.2900% ( 9) 00:09:54.225 12149.366 - 12199.778: 98.3446% ( 10) 00:09:54.225 12199.778 - 12250.191: 98.3610% ( 3) 00:09:54.225 12250.191 - 12300.603: 98.4047% ( 8) 00:09:54.225 12300.603 - 12351.015: 98.4320% ( 5) 00:09:54.225 12351.015 - 12401.428: 98.4648% ( 6) 00:09:54.225 12401.428 - 12451.840: 98.4976% ( 6) 00:09:54.225 12451.840 - 12502.252: 98.5194% ( 4) 00:09:54.225 12502.252 - 12552.665: 98.5577% ( 7) 00:09:54.225 12552.665 - 12603.077: 98.5741% ( 3) 00:09:54.225 12603.077 - 12653.489: 98.5959% ( 4) 00:09:54.225 12653.489 - 12703.902: 98.6233% ( 5) 00:09:54.225 12703.902 - 12754.314: 98.6396% ( 3) 00:09:54.225 12754.314 - 12804.726: 98.6670% ( 5) 00:09:54.225 12804.726 - 12855.138: 98.6888% ( 4) 00:09:54.225 12855.138 - 12905.551: 98.7052% ( 3) 00:09:54.225 12905.551 - 13006.375: 98.7380% ( 6) 00:09:54.225 13006.375 - 13107.200: 98.7544% ( 3) 00:09:54.225 13107.200 - 13208.025: 98.7653% ( 2) 00:09:54.225 13208.025 - 13308.849: 98.7762% ( 2) 00:09:54.225 13308.849 - 13409.674: 98.7981% ( 4) 00:09:54.225 13409.674 - 13510.498: 98.8035% ( 1) 00:09:54.225 13510.498 - 13611.323: 98.8145% ( 2) 00:09:54.225 13611.323 - 13712.148: 98.8309% ( 3) 00:09:54.225 13712.148 - 13812.972: 98.8418% ( 2) 00:09:54.225 13812.972 - 13913.797: 98.8527% ( 2) 00:09:54.225 13913.797 - 14014.622: 98.8691% ( 3) 00:09:54.225 14014.622 - 14115.446: 98.8855% ( 3) 00:09:54.225 14115.446 - 14216.271: 98.8910% ( 1) 00:09:54.225 14216.271 - 14317.095: 98.9073% ( 3) 00:09:54.225 14317.095 - 14417.920: 98.9183% ( 2) 00:09:54.225 14417.920 - 14518.745: 98.9347% ( 3) 00:09:54.225 14518.745 - 14619.569: 98.9456% ( 2) 00:09:54.225 14619.569 - 14720.394: 98.9565% ( 2) 00:09:54.225 14720.394 - 14821.218: 98.9784% ( 4) 00:09:54.225 14922.043 - 15022.868: 98.9948% ( 3) 00:09:54.225 15022.868 - 15123.692: 99.0111% ( 3) 00:09:54.225 15123.692 - 15224.517: 99.0166% ( 1) 00:09:54.225 15224.517 - 15325.342: 99.0385% ( 4) 00:09:54.225 15325.342 - 15426.166: 99.0494% ( 2) 00:09:54.225 15426.166 - 15526.991: 99.0658% ( 3) 00:09:54.225 15526.991 - 15627.815: 99.0712% ( 1) 00:09:54.225 15627.815 - 15728.640: 99.0876% ( 3) 00:09:54.225 15728.640 - 15829.465: 99.1040% ( 3) 00:09:54.225 15829.465 - 15930.289: 99.1149% ( 2) 00:09:54.225 15930.289 - 16031.114: 99.1204% ( 1) 00:09:54.225 16031.114 - 16131.938: 99.1368% ( 3) 00:09:54.225 16131.938 - 16232.763: 99.1477% ( 2) 00:09:54.225 16232.763 - 16333.588: 99.1696% ( 4) 00:09:54.225 16333.588 - 16434.412: 99.1750% ( 1) 00:09:54.225 16434.412 - 16535.237: 99.1805% ( 1) 00:09:54.225 16535.237 - 16636.062: 99.2024% ( 4) 00:09:54.226 16636.062 - 16736.886: 99.2188% ( 3) 00:09:54.226 16837.711 - 16938.535: 99.2351% ( 3) 00:09:54.226 16938.535 - 17039.360: 99.2570% ( 4) 00:09:54.226 17039.360 - 17140.185: 99.2625% ( 1) 00:09:54.226 17140.185 - 17241.009: 99.2734% ( 2) 00:09:54.226 17241.009 - 17341.834: 99.2898% ( 3) 00:09:54.226 17341.834 - 17442.658: 99.3007% ( 2) 00:09:54.226 20265.748 - 20366.572: 99.3062% ( 1) 00:09:54.226 20366.572 - 20467.397: 99.3171% ( 2) 00:09:54.226 20467.397 - 20568.222: 99.3280% ( 2) 00:09:54.226 20568.222 - 20669.046: 99.3389% ( 2) 00:09:54.226 20669.046 - 20769.871: 99.3608% ( 4) 00:09:54.226 20769.871 - 20870.695: 99.3826% ( 4) 00:09:54.226 
20870.695 - 20971.520: 99.3990% ( 3) 00:09:54.226 20971.520 - 21072.345: 99.4209% ( 4) 00:09:54.226 21072.345 - 21173.169: 99.4427% ( 4) 00:09:54.226 21173.169 - 21273.994: 99.4646% ( 4) 00:09:54.226 21273.994 - 21374.818: 99.4810% ( 3) 00:09:54.226 21374.818 - 21475.643: 99.5083% ( 5) 00:09:54.226 21475.643 - 21576.468: 99.5302% ( 4) 00:09:54.226 21576.468 - 21677.292: 99.5520% ( 4) 00:09:54.226 21677.292 - 21778.117: 99.5684% ( 3) 00:09:54.226 21778.117 - 21878.942: 99.5903% ( 4) 00:09:54.226 21878.942 - 21979.766: 99.6121% ( 4) 00:09:54.226 21979.766 - 22080.591: 99.6285% ( 3) 00:09:54.226 22080.591 - 22181.415: 99.6503% ( 4) 00:09:54.226 22181.415 - 22282.240: 99.6722% ( 4) 00:09:54.226 22282.240 - 22383.065: 99.6941% ( 4) 00:09:54.226 22383.065 - 22483.889: 99.7159% ( 4) 00:09:54.226 22483.889 - 22584.714: 99.7378% ( 4) 00:09:54.226 22584.714 - 22685.538: 99.7542% ( 3) 00:09:54.226 22685.538 - 22786.363: 99.7705% ( 3) 00:09:54.226 22786.363 - 22887.188: 99.7924% ( 4) 00:09:54.226 22887.188 - 22988.012: 99.8088% ( 3) 00:09:54.226 22988.012 - 23088.837: 99.8306% ( 4) 00:09:54.226 23088.837 - 23189.662: 99.8470% ( 3) 00:09:54.226 23189.662 - 23290.486: 99.8689% ( 4) 00:09:54.226 23290.486 - 23391.311: 99.8907% ( 4) 00:09:54.226 23391.311 - 23492.135: 99.9071% ( 3) 00:09:54.226 23492.135 - 23592.960: 99.9235% ( 3) 00:09:54.226 23592.960 - 23693.785: 99.9508% ( 5) 00:09:54.226 23693.785 - 23794.609: 99.9672% ( 3) 00:09:54.226 23794.609 - 23895.434: 99.9891% ( 4) 00:09:54.226 23895.434 - 23996.258: 100.0000% ( 2) 00:09:54.226 00:09:54.226 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:54.226 ============================================================================== 00:09:54.226 Range in us Cumulative IO count 00:09:54.226 5192.468 - 5217.674: 0.0219% ( 4) 00:09:54.226 5217.674 - 5242.880: 0.1147% ( 17) 00:09:54.226 5242.880 - 5268.086: 0.2295% ( 21) 00:09:54.226 5268.086 - 5293.292: 0.4043% ( 32) 00:09:54.226 5293.292 - 5318.498: 0.6720% ( 49) 00:09:54.226 5318.498 - 5343.705: 1.0271% ( 65) 00:09:54.226 5343.705 - 5368.911: 1.5079% ( 88) 00:09:54.226 5368.911 - 5394.117: 1.8794% ( 68) 00:09:54.226 5394.117 - 5419.323: 2.3492% ( 86) 00:09:54.226 5419.323 - 5444.529: 2.9830% ( 116) 00:09:54.226 5444.529 - 5469.735: 3.6058% ( 114) 00:09:54.226 5469.735 - 5494.942: 4.3979% ( 145) 00:09:54.226 5494.942 - 5520.148: 5.2010% ( 147) 00:09:54.226 5520.148 - 5545.354: 5.9604% ( 139) 00:09:54.226 5545.354 - 5570.560: 6.8564% ( 164) 00:09:54.226 5570.560 - 5595.766: 7.8453% ( 181) 00:09:54.226 5595.766 - 5620.972: 8.8341% ( 181) 00:09:54.226 5620.972 - 5646.178: 9.8885% ( 193) 00:09:54.226 5646.178 - 5671.385: 10.8829% ( 182) 00:09:54.226 5671.385 - 5696.591: 12.0138% ( 207) 00:09:54.226 5696.591 - 5721.797: 13.0791% ( 195) 00:09:54.226 5721.797 - 5747.003: 14.1663% ( 199) 00:09:54.226 5747.003 - 5772.209: 15.2644% ( 201) 00:09:54.226 5772.209 - 5797.415: 16.3407% ( 197) 00:09:54.226 5797.415 - 5822.622: 17.4006% ( 194) 00:09:54.226 5822.622 - 5847.828: 18.4932% ( 200) 00:09:54.226 5847.828 - 5873.034: 19.6460% ( 211) 00:09:54.226 5873.034 - 5898.240: 20.7550% ( 203) 00:09:54.226 5898.240 - 5923.446: 21.8313% ( 197) 00:09:54.226 5923.446 - 5948.652: 22.9349% ( 202) 00:09:54.226 5948.652 - 5973.858: 24.0439% ( 203) 00:09:54.226 5973.858 - 5999.065: 25.1366% ( 200) 00:09:54.226 5999.065 - 6024.271: 26.2402% ( 202) 00:09:54.226 6024.271 - 6049.477: 27.3875% ( 210) 00:09:54.226 6049.477 - 6074.683: 28.5074% ( 205) 00:09:54.226 6074.683 - 6099.889: 29.6383% ( 207) 00:09:54.226 
6099.889 - 6125.095: 30.8293% ( 218) 00:09:54.226 6125.095 - 6150.302: 31.9766% ( 210) 00:09:54.226 6150.302 - 6175.508: 33.1130% ( 208) 00:09:54.226 6175.508 - 6200.714: 34.2657% ( 211) 00:09:54.226 6200.714 - 6225.920: 35.4130% ( 210) 00:09:54.226 6225.920 - 6251.126: 36.5986% ( 217) 00:09:54.226 6251.126 - 6276.332: 37.7950% ( 219) 00:09:54.226 6276.332 - 6301.538: 38.9532% ( 212) 00:09:54.226 6301.538 - 6326.745: 40.1333% ( 216) 00:09:54.226 6326.745 - 6351.951: 41.3298% ( 219) 00:09:54.226 6351.951 - 6377.157: 42.5645% ( 226) 00:09:54.226 6377.157 - 6402.363: 43.7664% ( 220) 00:09:54.226 6402.363 - 6427.569: 44.9738% ( 221) 00:09:54.226 6427.569 - 6452.775: 46.2139% ( 227) 00:09:54.226 6452.775 - 6503.188: 48.6342% ( 443) 00:09:54.226 6503.188 - 6553.600: 51.0271% ( 438) 00:09:54.226 6553.600 - 6604.012: 53.4856% ( 450) 00:09:54.226 6604.012 - 6654.425: 55.9331% ( 448) 00:09:54.226 6654.425 - 6704.837: 58.3752% ( 447) 00:09:54.226 6704.837 - 6755.249: 60.8337% ( 450) 00:09:54.226 6755.249 - 6805.662: 63.3086% ( 453) 00:09:54.226 6805.662 - 6856.074: 65.7343% ( 444) 00:09:54.226 6856.074 - 6906.486: 68.0889% ( 431) 00:09:54.226 6906.486 - 6956.898: 70.3289% ( 410) 00:09:54.226 6956.898 - 7007.311: 72.3885% ( 377) 00:09:54.226 7007.311 - 7057.723: 74.2952% ( 349) 00:09:54.226 7057.723 - 7108.135: 75.9397% ( 301) 00:09:54.226 7108.135 - 7158.548: 77.3164% ( 252) 00:09:54.226 7158.548 - 7208.960: 78.4528% ( 208) 00:09:54.226 7208.960 - 7259.372: 79.2286% ( 142) 00:09:54.226 7259.372 - 7309.785: 79.8350% ( 111) 00:09:54.226 7309.785 - 7360.197: 80.3267% ( 90) 00:09:54.226 7360.197 - 7410.609: 80.8184% ( 90) 00:09:54.226 7410.609 - 7461.022: 81.2555% ( 80) 00:09:54.226 7461.022 - 7511.434: 81.6434% ( 71) 00:09:54.226 7511.434 - 7561.846: 82.0367% ( 72) 00:09:54.226 7561.846 - 7612.258: 82.4082% ( 68) 00:09:54.226 7612.258 - 7662.671: 82.7688% ( 66) 00:09:54.226 7662.671 - 7713.083: 83.0966% ( 60) 00:09:54.226 7713.083 - 7763.495: 83.4353% ( 62) 00:09:54.226 7763.495 - 7813.908: 83.7413% ( 56) 00:09:54.226 7813.908 - 7864.320: 84.0527% ( 57) 00:09:54.226 7864.320 - 7914.732: 84.3531% ( 55) 00:09:54.226 7914.732 - 7965.145: 84.6099% ( 47) 00:09:54.226 7965.145 - 8015.557: 84.8612% ( 46) 00:09:54.226 8015.557 - 8065.969: 85.1071% ( 45) 00:09:54.226 8065.969 - 8116.382: 85.3475% ( 44) 00:09:54.226 8116.382 - 8166.794: 85.5824% ( 43) 00:09:54.226 8166.794 - 8217.206: 85.8064% ( 41) 00:09:54.226 8217.206 - 8267.618: 86.0304% ( 41) 00:09:54.226 8267.618 - 8318.031: 86.2489% ( 40) 00:09:54.226 8318.031 - 8368.443: 86.4620% ( 39) 00:09:54.226 8368.443 - 8418.855: 86.6532% ( 35) 00:09:54.226 8418.855 - 8469.268: 86.8717% ( 40) 00:09:54.226 8469.268 - 8519.680: 87.0684% ( 36) 00:09:54.226 8519.680 - 8570.092: 87.2705% ( 37) 00:09:54.226 8570.092 - 8620.505: 87.4727% ( 37) 00:09:54.226 8620.505 - 8670.917: 87.6694% ( 36) 00:09:54.226 8670.917 - 8721.329: 87.8770% ( 38) 00:09:54.226 8721.329 - 8771.742: 88.1119% ( 43) 00:09:54.226 8771.742 - 8822.154: 88.4561% ( 63) 00:09:54.226 8822.154 - 8872.566: 88.6965% ( 44) 00:09:54.226 8872.566 - 8922.978: 88.9150% ( 40) 00:09:54.226 8922.978 - 8973.391: 89.1608% ( 45) 00:09:54.226 8973.391 - 9023.803: 89.4012% ( 44) 00:09:54.226 9023.803 - 9074.215: 89.6689% ( 49) 00:09:54.227 9074.215 - 9124.628: 89.9148% ( 45) 00:09:54.227 9124.628 - 9175.040: 90.1770% ( 48) 00:09:54.227 9175.040 - 9225.452: 90.4338% ( 47) 00:09:54.227 9225.452 - 9275.865: 90.6906% ( 47) 00:09:54.227 9275.865 - 9326.277: 90.9364% ( 45) 00:09:54.227 9326.277 - 9376.689: 91.1659% ( 42) 
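Each of these histograms is cumulative: the percentage column is the share of all I/Os completed at or below the bucket's upper bound, and the percentile summaries above are read straight off it as the upper bound of the first bucket reaching the target. A sketch of that lookup in awk, assuming the bucket lines have been extracted one per line into a hypothetical histogram.txt with the timestamp prefixes stripped:

# Print the upper bound of the first bucket whose cumulative
# percentage reaches the target percentile.
target=99.0
awk -v t="$target" '
    # bucket lines look like: "15022.868 - 15123.692: 99.0111% ( 2)"
    $2 == "-" && $4 ~ /%$/ {
        pct = $4; sub(/%/, "", pct)
        if (pct + 0 >= t) { hi = $3; sub(/:$/, "", hi); print hi " us"; exit }
    }
' histogram.txt

For the 0000:00:06.0 histogram this prints 15123.692 us, the same value its summary reports at 99.00000%.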
00:09:54.227 9376.689 - 9427.102: 91.4008% ( 43) 00:09:54.227 9427.102 - 9477.514: 91.6466% ( 45) 00:09:54.227 9477.514 - 9527.926: 91.8870% ( 44) 00:09:54.227 9527.926 - 9578.338: 92.0946% ( 38) 00:09:54.227 9578.338 - 9628.751: 92.2968% ( 37) 00:09:54.227 9628.751 - 9679.163: 92.4934% ( 36) 00:09:54.227 9679.163 - 9729.575: 92.6956% ( 37) 00:09:54.227 9729.575 - 9779.988: 92.8704% ( 32) 00:09:54.227 9779.988 - 9830.400: 93.0562% ( 34) 00:09:54.227 9830.400 - 9880.812: 93.2310% ( 32) 00:09:54.227 9880.812 - 9931.225: 93.4167% ( 34) 00:09:54.227 9931.225 - 9981.637: 93.6243% ( 38) 00:09:54.227 9981.637 - 10032.049: 93.8483% ( 41) 00:09:54.227 10032.049 - 10082.462: 94.0723% ( 41) 00:09:54.227 10082.462 - 10132.874: 94.2854% ( 39) 00:09:54.227 10132.874 - 10183.286: 94.4985% ( 39) 00:09:54.227 10183.286 - 10233.698: 94.7225% ( 41) 00:09:54.227 10233.698 - 10284.111: 94.9574% ( 43) 00:09:54.227 10284.111 - 10334.523: 95.1868% ( 42) 00:09:54.227 10334.523 - 10384.935: 95.3781% ( 35) 00:09:54.227 10384.935 - 10435.348: 95.5583% ( 33) 00:09:54.227 10435.348 - 10485.760: 95.7168% ( 29) 00:09:54.227 10485.760 - 10536.172: 95.8588% ( 26) 00:09:54.227 10536.172 - 10586.585: 95.9736% ( 21) 00:09:54.227 10586.585 - 10636.997: 96.0883% ( 21) 00:09:54.227 10636.997 - 10687.409: 96.1921% ( 19) 00:09:54.227 10687.409 - 10737.822: 96.2850% ( 17) 00:09:54.227 10737.822 - 10788.234: 96.3778% ( 17) 00:09:54.227 10788.234 - 10838.646: 96.4653% ( 16) 00:09:54.227 10838.646 - 10889.058: 96.5472% ( 15) 00:09:54.227 10889.058 - 10939.471: 96.6401% ( 17) 00:09:54.227 10939.471 - 10989.883: 96.7330% ( 17) 00:09:54.227 10989.883 - 11040.295: 96.8258% ( 17) 00:09:54.227 11040.295 - 11090.708: 96.9296% ( 19) 00:09:54.227 11090.708 - 11141.120: 97.0444% ( 21) 00:09:54.227 11141.120 - 11191.532: 97.1427% ( 18) 00:09:54.227 11191.532 - 11241.945: 97.2520% ( 20) 00:09:54.227 11241.945 - 11292.357: 97.3558% ( 19) 00:09:54.227 11292.357 - 11342.769: 97.4705% ( 21) 00:09:54.227 11342.769 - 11393.182: 97.5470% ( 14) 00:09:54.227 11393.182 - 11443.594: 97.6399% ( 17) 00:09:54.227 11443.594 - 11494.006: 97.7218% ( 15) 00:09:54.227 11494.006 - 11544.418: 97.7983% ( 14) 00:09:54.227 11544.418 - 11594.831: 97.8912% ( 17) 00:09:54.227 11594.831 - 11645.243: 97.9622% ( 13) 00:09:54.227 11645.243 - 11695.655: 98.0278% ( 12) 00:09:54.227 11695.655 - 11746.068: 98.0933% ( 12) 00:09:54.227 11746.068 - 11796.480: 98.1534% ( 11) 00:09:54.227 11796.480 - 11846.892: 98.2190% ( 12) 00:09:54.227 11846.892 - 11897.305: 98.2627% ( 8) 00:09:54.227 11897.305 - 11947.717: 98.2845% ( 4) 00:09:54.227 11947.717 - 11998.129: 98.3064% ( 4) 00:09:54.227 11998.129 - 12048.542: 98.3282% ( 4) 00:09:54.227 12048.542 - 12098.954: 98.3446% ( 3) 00:09:54.227 12098.954 - 12149.366: 98.3665% ( 4) 00:09:54.227 12149.366 - 12199.778: 98.3829% ( 3) 00:09:54.227 12199.778 - 12250.191: 98.4047% ( 4) 00:09:54.227 12250.191 - 12300.603: 98.4211% ( 3) 00:09:54.227 12300.603 - 12351.015: 98.4484% ( 5) 00:09:54.227 12351.015 - 12401.428: 98.4757% ( 5) 00:09:54.227 12401.428 - 12451.840: 98.5085% ( 6) 00:09:54.227 12451.840 - 12502.252: 98.5358% ( 5) 00:09:54.227 12502.252 - 12552.665: 98.5577% ( 4) 00:09:54.227 12552.665 - 12603.077: 98.5850% ( 5) 00:09:54.227 12603.077 - 12653.489: 98.6178% ( 6) 00:09:54.227 12653.489 - 12703.902: 98.6451% ( 5) 00:09:54.227 12703.902 - 12754.314: 98.6724% ( 5) 00:09:54.227 12754.314 - 12804.726: 98.6833% ( 2) 00:09:54.227 12804.726 - 12855.138: 98.6888% ( 1) 00:09:54.227 12855.138 - 12905.551: 98.6943% ( 1) 00:09:54.227 12905.551 - 
13006.375: 98.7107% ( 3) 00:09:54.227 13006.375 - 13107.200: 98.7271% ( 3) 00:09:54.227 13107.200 - 13208.025: 98.7380% ( 2) 00:09:54.227 13208.025 - 13308.849: 98.7544% ( 3) 00:09:54.227 13308.849 - 13409.674: 98.7708% ( 3) 00:09:54.227 13409.674 - 13510.498: 98.7872% ( 3) 00:09:54.227 13510.498 - 13611.323: 98.8035% ( 3) 00:09:54.227 13611.323 - 13712.148: 98.8199% ( 3) 00:09:54.227 13712.148 - 13812.972: 98.8309% ( 2) 00:09:54.227 13812.972 - 13913.797: 98.8472% ( 3) 00:09:54.227 13913.797 - 14014.622: 98.8636% ( 3) 00:09:54.227 14014.622 - 14115.446: 98.8800% ( 3) 00:09:54.227 14115.446 - 14216.271: 98.8964% ( 3) 00:09:54.227 14216.271 - 14317.095: 98.9128% ( 3) 00:09:54.227 14317.095 - 14417.920: 98.9237% ( 2) 00:09:54.227 14417.920 - 14518.745: 98.9401% ( 3) 00:09:54.227 14518.745 - 14619.569: 98.9565% ( 3) 00:09:54.227 14619.569 - 14720.394: 98.9729% ( 3) 00:09:54.227 14720.394 - 14821.218: 98.9893% ( 3) 00:09:54.227 14821.218 - 14922.043: 99.0002% ( 2) 00:09:54.227 14922.043 - 15022.868: 99.0166% ( 3) 00:09:54.227 15022.868 - 15123.692: 99.0330% ( 3) 00:09:54.227 15123.692 - 15224.517: 99.0439% ( 2) 00:09:54.227 15224.517 - 15325.342: 99.0603% ( 3) 00:09:54.227 15325.342 - 15426.166: 99.0767% ( 3) 00:09:54.227 15426.166 - 15526.991: 99.0931% ( 3) 00:09:54.227 15526.991 - 15627.815: 99.1040% ( 2) 00:09:54.227 15627.815 - 15728.640: 99.1204% ( 3) 00:09:54.227 15728.640 - 15829.465: 99.1368% ( 3) 00:09:54.227 15829.465 - 15930.289: 99.1532% ( 3) 00:09:54.227 15930.289 - 16031.114: 99.1641% ( 2) 00:09:54.227 16031.114 - 16131.938: 99.1805% ( 3) 00:09:54.227 16131.938 - 16232.763: 99.1914% ( 2) 00:09:54.227 16232.763 - 16333.588: 99.2078% ( 3) 00:09:54.227 16333.588 - 16434.412: 99.2242% ( 3) 00:09:54.227 16434.412 - 16535.237: 99.2406% ( 3) 00:09:54.227 16535.237 - 16636.062: 99.2570% ( 3) 00:09:54.227 16636.062 - 16736.886: 99.2734% ( 3) 00:09:54.227 16736.886 - 16837.711: 99.2843% ( 2) 00:09:54.227 16837.711 - 16938.535: 99.3007% ( 3) 00:09:54.227 19559.975 - 19660.800: 99.3062% ( 1) 00:09:54.227 19660.800 - 19761.625: 99.3280% ( 4) 00:09:54.227 19761.625 - 19862.449: 99.3444% ( 3) 00:09:54.227 19862.449 - 19963.274: 99.3663% ( 4) 00:09:54.227 19963.274 - 20064.098: 99.3826% ( 3) 00:09:54.227 20064.098 - 20164.923: 99.4045% ( 4) 00:09:54.227 20164.923 - 20265.748: 99.4209% ( 3) 00:09:54.227 20265.748 - 20366.572: 99.4427% ( 4) 00:09:54.227 20366.572 - 20467.397: 99.4646% ( 4) 00:09:54.227 20467.397 - 20568.222: 99.4810% ( 3) 00:09:54.227 20568.222 - 20669.046: 99.4974% ( 3) 00:09:54.227 20669.046 - 20769.871: 99.5192% ( 4) 00:09:54.227 20769.871 - 20870.695: 99.5411% ( 4) 00:09:54.227 20870.695 - 20971.520: 99.5575% ( 3) 00:09:54.227 20971.520 - 21072.345: 99.5793% ( 4) 00:09:54.227 21072.345 - 21173.169: 99.6012% ( 4) 00:09:54.227 21173.169 - 21273.994: 99.6176% ( 3) 00:09:54.227 21273.994 - 21374.818: 99.6394% ( 4) 00:09:54.227 21374.818 - 21475.643: 99.6613% ( 4) 00:09:54.227 21475.643 - 21576.468: 99.6777% ( 3) 00:09:54.227 21576.468 - 21677.292: 99.6941% ( 3) 00:09:54.227 21677.292 - 21778.117: 99.7104% ( 3) 00:09:54.227 21778.117 - 21878.942: 99.7323% ( 4) 00:09:54.227 21878.942 - 21979.766: 99.7487% ( 3) 00:09:54.227 21979.766 - 22080.591: 99.7705% ( 4) 00:09:54.227 22080.591 - 22181.415: 99.7924% ( 4) 00:09:54.227 22181.415 - 22282.240: 99.8088% ( 3) 00:09:54.227 22282.240 - 22383.065: 99.8306% ( 4) 00:09:54.227 22383.065 - 22483.889: 99.8525% ( 4) 00:09:54.227 22483.889 - 22584.714: 99.8743% ( 4) 00:09:54.227 22584.714 - 22685.538: 99.8907% ( 3) 00:09:54.227 22685.538 - 
22786.363: 99.9126% ( 4) 00:09:54.227 22786.363 - 22887.188: 99.9290% ( 3) 00:09:54.227 22887.188 - 22988.012: 99.9508% ( 4) 00:09:54.227 22988.012 - 23088.837: 99.9727% ( 4) 00:09:54.228 23088.837 - 23189.662: 99.9891% ( 3) 00:09:54.228 23189.662 - 23290.486: 100.0000% ( 2) 00:09:54.228 00:09:54.228 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:54.228 ============================================================================== 00:09:54.228 Range in us Cumulative IO count 00:09:54.228 5142.055 - 5167.262: 0.0164% ( 3) 00:09:54.228 5167.262 - 5192.468: 0.0492% ( 6) 00:09:54.228 5192.468 - 5217.674: 0.0710% ( 4) 00:09:54.228 5217.674 - 5242.880: 0.1584% ( 16) 00:09:54.228 5242.880 - 5268.086: 0.2841% ( 23) 00:09:54.228 5268.086 - 5293.292: 0.4917% ( 38) 00:09:54.228 5293.292 - 5318.498: 0.6665% ( 32) 00:09:54.228 5318.498 - 5343.705: 1.0107% ( 63) 00:09:54.228 5343.705 - 5368.911: 1.3440% ( 61) 00:09:54.228 5368.911 - 5394.117: 1.7810% ( 80) 00:09:54.228 5394.117 - 5419.323: 2.3164% ( 98) 00:09:54.228 5419.323 - 5444.529: 2.8955% ( 106) 00:09:54.228 5444.529 - 5469.735: 3.4583% ( 103) 00:09:54.228 5469.735 - 5494.942: 4.2340% ( 142) 00:09:54.228 5494.942 - 5520.148: 5.1355% ( 165) 00:09:54.228 5520.148 - 5545.354: 6.0424% ( 166) 00:09:54.228 5545.354 - 5570.560: 6.9220% ( 161) 00:09:54.228 5570.560 - 5595.766: 7.8999% ( 179) 00:09:54.228 5595.766 - 5620.972: 8.9325% ( 189) 00:09:54.228 5620.972 - 5646.178: 9.8612% ( 170) 00:09:54.228 5646.178 - 5671.385: 10.8501% ( 181) 00:09:54.228 5671.385 - 5696.591: 11.8499% ( 183) 00:09:54.228 5696.591 - 5721.797: 12.9316% ( 198) 00:09:54.228 5721.797 - 5747.003: 13.9806% ( 192) 00:09:54.228 5747.003 - 5772.209: 15.0404% ( 194) 00:09:54.228 5772.209 - 5797.415: 16.0730% ( 189) 00:09:54.228 5797.415 - 5822.622: 17.1219% ( 192) 00:09:54.228 5822.622 - 5847.828: 18.2037% ( 198) 00:09:54.228 5847.828 - 5873.034: 19.3073% ( 202) 00:09:54.228 5873.034 - 5898.240: 20.4108% ( 202) 00:09:54.228 5898.240 - 5923.446: 21.5363% ( 206) 00:09:54.228 5923.446 - 5948.652: 22.6672% ( 207) 00:09:54.228 5948.652 - 5973.858: 23.7817% ( 204) 00:09:54.228 5973.858 - 5999.065: 24.9126% ( 207) 00:09:54.228 5999.065 - 6024.271: 26.0162% ( 202) 00:09:54.228 6024.271 - 6049.477: 27.1525% ( 208) 00:09:54.228 6049.477 - 6074.683: 28.2834% ( 207) 00:09:54.228 6074.683 - 6099.889: 29.4362% ( 211) 00:09:54.228 6099.889 - 6125.095: 30.5944% ( 212) 00:09:54.228 6125.095 - 6150.302: 31.7526% ( 212) 00:09:54.228 6150.302 - 6175.508: 32.9709% ( 223) 00:09:54.228 6175.508 - 6200.714: 34.1346% ( 213) 00:09:54.228 6200.714 - 6225.920: 35.3038% ( 214) 00:09:54.228 6225.920 - 6251.126: 36.4620% ( 212) 00:09:54.228 6251.126 - 6276.332: 37.6584% ( 219) 00:09:54.228 6276.332 - 6301.538: 38.8604% ( 220) 00:09:54.228 6301.538 - 6326.745: 40.0404% ( 216) 00:09:54.228 6326.745 - 6351.951: 41.2314% ( 218) 00:09:54.228 6351.951 - 6377.157: 42.4497% ( 223) 00:09:54.228 6377.157 - 6402.363: 43.6407% ( 218) 00:09:54.228 6402.363 - 6427.569: 44.8590% ( 223) 00:09:54.228 6427.569 - 6452.775: 46.0555% ( 219) 00:09:54.228 6452.775 - 6503.188: 48.4757% ( 443) 00:09:54.228 6503.188 - 6553.600: 50.9014% ( 444) 00:09:54.228 6553.600 - 6604.012: 53.3763% ( 453) 00:09:54.228 6604.012 - 6654.425: 55.8075% ( 445) 00:09:54.228 6654.425 - 6704.837: 58.2496% ( 447) 00:09:54.228 6704.837 - 6755.249: 60.6425% ( 438) 00:09:54.228 6755.249 - 6805.662: 63.0955% ( 449) 00:09:54.228 6805.662 - 6856.074: 65.5103% ( 442) 00:09:54.228 6856.074 - 6906.486: 67.8813% ( 434) 00:09:54.228 6906.486 - 6956.898: 
70.1267% ( 411) 00:09:54.228 6956.898 - 7007.311: 72.2793% ( 394) 00:09:54.228 7007.311 - 7057.723: 74.2570% ( 362) 00:09:54.228 7057.723 - 7108.135: 75.9670% ( 313) 00:09:54.228 7108.135 - 7158.548: 77.3438% ( 252) 00:09:54.228 7158.548 - 7208.960: 78.4309% ( 199) 00:09:54.228 7208.960 - 7259.372: 79.2941% ( 158) 00:09:54.228 7259.372 - 7309.785: 79.9443% ( 119) 00:09:54.228 7309.785 - 7360.197: 80.5070% ( 103) 00:09:54.228 7360.197 - 7410.609: 81.0588% ( 101) 00:09:54.228 7410.609 - 7461.022: 81.5286% ( 86) 00:09:54.228 7461.022 - 7511.434: 82.0203% ( 90) 00:09:54.228 7511.434 - 7561.846: 82.4683% ( 82) 00:09:54.228 7561.846 - 7612.258: 82.8890% ( 77) 00:09:54.228 7612.258 - 7662.671: 83.2878% ( 73) 00:09:54.228 7662.671 - 7713.083: 83.6648% ( 69) 00:09:54.228 7713.083 - 7763.495: 84.0199% ( 65) 00:09:54.228 7763.495 - 7813.908: 84.3477% ( 60) 00:09:54.228 7813.908 - 7864.320: 84.6318% ( 52) 00:09:54.228 7864.320 - 7914.732: 84.8940% ( 48) 00:09:54.228 7914.732 - 7965.145: 85.1890% ( 54) 00:09:54.228 7965.145 - 8015.557: 85.4458% ( 47) 00:09:54.228 8015.557 - 8065.969: 85.7408% ( 54) 00:09:54.228 8065.969 - 8116.382: 85.9812% ( 44) 00:09:54.228 8116.382 - 8166.794: 86.2161% ( 43) 00:09:54.228 8166.794 - 8217.206: 86.4510% ( 43) 00:09:54.228 8217.206 - 8267.618: 86.6914% ( 44) 00:09:54.228 8267.618 - 8318.031: 86.9482% ( 47) 00:09:54.228 8318.031 - 8368.443: 87.1831% ( 43) 00:09:54.228 8368.443 - 8418.855: 87.4344% ( 46) 00:09:54.228 8418.855 - 8469.268: 87.6748% ( 44) 00:09:54.228 8469.268 - 8519.680: 87.8934% ( 40) 00:09:54.228 8519.680 - 8570.092: 88.0846% ( 35) 00:09:54.228 8570.092 - 8620.505: 88.2867% ( 37) 00:09:54.228 8620.505 - 8670.917: 88.4670% ( 33) 00:09:54.228 8670.917 - 8721.329: 88.6528% ( 34) 00:09:54.228 8721.329 - 8771.742: 88.8440% ( 35) 00:09:54.228 8771.742 - 8822.154: 89.0297% ( 34) 00:09:54.228 8822.154 - 8872.566: 89.2155% ( 34) 00:09:54.228 8872.566 - 8922.978: 89.4067% ( 35) 00:09:54.228 8922.978 - 8973.391: 89.5870% ( 33) 00:09:54.228 8973.391 - 9023.803: 89.7837% ( 36) 00:09:54.228 9023.803 - 9074.215: 89.9967% ( 39) 00:09:54.228 9074.215 - 9124.628: 90.2262% ( 42) 00:09:54.228 9124.628 - 9175.040: 90.4502% ( 41) 00:09:54.228 9175.040 - 9225.452: 90.6578% ( 38) 00:09:54.228 9225.452 - 9275.865: 90.8708% ( 39) 00:09:54.228 9275.865 - 9326.277: 91.0839% ( 39) 00:09:54.228 9326.277 - 9376.689: 91.2861% ( 37) 00:09:54.228 9376.689 - 9427.102: 91.4937% ( 38) 00:09:54.228 9427.102 - 9477.514: 91.7013% ( 38) 00:09:54.228 9477.514 - 9527.926: 91.9143% ( 39) 00:09:54.228 9527.926 - 9578.338: 92.1329% ( 40) 00:09:54.228 9578.338 - 9628.751: 92.3459% ( 39) 00:09:54.228 9628.751 - 9679.163: 92.5590% ( 39) 00:09:54.228 9679.163 - 9729.575: 92.7721% ( 39) 00:09:54.228 9729.575 - 9779.988: 92.9688% ( 36) 00:09:54.228 9779.988 - 9830.400: 93.1818% ( 39) 00:09:54.228 9830.400 - 9880.812: 93.3730% ( 35) 00:09:54.228 9880.812 - 9931.225: 93.5588% ( 34) 00:09:54.228 9931.225 - 9981.637: 93.7445% ( 34) 00:09:54.228 9981.637 - 10032.049: 93.9030% ( 29) 00:09:54.228 10032.049 - 10082.462: 94.0559% ( 28) 00:09:54.228 10082.462 - 10132.874: 94.2253% ( 31) 00:09:54.228 10132.874 - 10183.286: 94.4001% ( 32) 00:09:54.228 10183.286 - 10233.698: 94.5586% ( 29) 00:09:54.228 10233.698 - 10284.111: 94.7225% ( 30) 00:09:54.228 10284.111 - 10334.523: 94.8700% ( 27) 00:09:54.228 10334.523 - 10384.935: 95.0229% ( 28) 00:09:54.228 10384.935 - 10435.348: 95.1541% ( 24) 00:09:54.228 10435.348 - 10485.760: 95.2906% ( 25) 00:09:54.228 10485.760 - 10536.172: 95.3944% ( 19) 00:09:54.228 10536.172 - 
10586.585: 95.5256% ( 24) 00:09:54.228 10586.585 - 10636.997: 95.6458% ( 22) 00:09:54.228 10636.997 - 10687.409: 95.7987% ( 28) 00:09:54.228 10687.409 - 10737.822: 95.9408% ( 26) 00:09:54.228 10737.822 - 10788.234: 96.0391% ( 18) 00:09:54.228 10788.234 - 10838.646: 96.1484% ( 20) 00:09:54.228 10838.646 - 10889.058: 96.2522% ( 19) 00:09:54.228 10889.058 - 10939.471: 96.3451% ( 17) 00:09:54.228 10939.471 - 10989.883: 96.4543% ( 20) 00:09:54.228 10989.883 - 11040.295: 96.5472% ( 17) 00:09:54.228 11040.295 - 11090.708: 96.6455% ( 18) 00:09:54.228 11090.708 - 11141.120: 96.7439% ( 18) 00:09:54.228 11141.120 - 11191.532: 96.8313% ( 16) 00:09:54.229 11191.532 - 11241.945: 96.9187% ( 16) 00:09:54.229 11241.945 - 11292.357: 97.0061% ( 16) 00:09:54.229 11292.357 - 11342.769: 97.0990% ( 17) 00:09:54.229 11342.769 - 11393.182: 97.1809% ( 15) 00:09:54.229 11393.182 - 11443.594: 97.2684% ( 16) 00:09:54.229 11443.594 - 11494.006: 97.3558% ( 16) 00:09:54.229 11494.006 - 11544.418: 97.4377% ( 15) 00:09:54.229 11544.418 - 11594.831: 97.5361% ( 18) 00:09:54.229 11594.831 - 11645.243: 97.6125% ( 14) 00:09:54.229 11645.243 - 11695.655: 97.7000% ( 16) 00:09:54.229 11695.655 - 11746.068: 97.7764% ( 14) 00:09:54.229 11746.068 - 11796.480: 97.8693% ( 17) 00:09:54.229 11796.480 - 11846.892: 97.9567% ( 16) 00:09:54.229 11846.892 - 11897.305: 98.0387% ( 15) 00:09:54.229 11897.305 - 11947.717: 98.1206% ( 15) 00:09:54.229 11947.717 - 11998.129: 98.1698% ( 9) 00:09:54.229 11998.129 - 12048.542: 98.2135% ( 8) 00:09:54.229 12048.542 - 12098.954: 98.2627% ( 9) 00:09:54.229 12098.954 - 12149.366: 98.3118% ( 9) 00:09:54.229 12149.366 - 12199.778: 98.3665% ( 10) 00:09:54.229 12199.778 - 12250.191: 98.4266% ( 11) 00:09:54.229 12250.191 - 12300.603: 98.4703% ( 8) 00:09:54.229 12300.603 - 12351.015: 98.5140% ( 8) 00:09:54.229 12351.015 - 12401.428: 98.5522% ( 7) 00:09:54.229 12401.428 - 12451.840: 98.5905% ( 7) 00:09:54.229 12451.840 - 12502.252: 98.6178% ( 5) 00:09:54.229 12502.252 - 12552.665: 98.6342% ( 3) 00:09:54.229 12552.665 - 12603.077: 98.6506% ( 3) 00:09:54.229 12603.077 - 12653.489: 98.6615% ( 2) 00:09:54.229 12653.489 - 12703.902: 98.6670% ( 1) 00:09:54.229 12703.902 - 12754.314: 98.6724% ( 1) 00:09:54.229 12754.314 - 12804.726: 98.6833% ( 2) 00:09:54.229 12804.726 - 12855.138: 98.6888% ( 1) 00:09:54.229 12855.138 - 12905.551: 98.6997% ( 2) 00:09:54.229 12905.551 - 13006.375: 98.7161% ( 3) 00:09:54.229 13006.375 - 13107.200: 98.7271% ( 2) 00:09:54.229 13107.200 - 13208.025: 98.7434% ( 3) 00:09:54.229 13208.025 - 13308.849: 98.7598% ( 3) 00:09:54.229 13308.849 - 13409.674: 98.7762% ( 3) 00:09:54.229 13409.674 - 13510.498: 98.7926% ( 3) 00:09:54.229 13510.498 - 13611.323: 98.8090% ( 3) 00:09:54.229 13611.323 - 13712.148: 98.8254% ( 3) 00:09:54.229 13712.148 - 13812.972: 98.8418% ( 3) 00:09:54.229 13812.972 - 13913.797: 98.8527% ( 2) 00:09:54.229 13913.797 - 14014.622: 98.8691% ( 3) 00:09:54.229 14014.622 - 14115.446: 98.8855% ( 3) 00:09:54.229 14115.446 - 14216.271: 98.8964% ( 2) 00:09:54.229 14216.271 - 14317.095: 98.9128% ( 3) 00:09:54.229 14317.095 - 14417.920: 98.9292% ( 3) 00:09:54.229 14417.920 - 14518.745: 98.9456% ( 3) 00:09:54.229 14518.745 - 14619.569: 98.9620% ( 3) 00:09:54.229 14619.569 - 14720.394: 98.9784% ( 3) 00:09:54.229 14720.394 - 14821.218: 98.9948% ( 3) 00:09:54.229 14821.218 - 14922.043: 99.0057% ( 2) 00:09:54.229 14922.043 - 15022.868: 99.0221% ( 3) 00:09:54.229 15022.868 - 15123.692: 99.0385% ( 3) 00:09:54.229 15123.692 - 15224.517: 99.0549% ( 3) 00:09:54.229 15224.517 - 15325.342: 99.0712% ( 3) 
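The parenthesized numbers, by contrast, are per-bucket I/O counts rather than running totals (the final 100.0000% bucket of the 0000:00:06.0 histogram carries only 2), so summing them recovers the device's total I/O count, which should sit near IOPS times the 1-second runtime from -t 1. A sketch under the same hypothetical histogram.txt extraction as above:

# Sum per-bucket counts; expect roughly IOPS * runtime in total.
awk '
    $2 == "-" && $5 == "(" { total += $6 }   # awk reads "2)" as 2
    END { print total " I/Os" }
' histogram.txt
# -> about 18276 per controller for this 1-second read test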
00:09:54.229 15325.342 - 15426.166: 99.0876% ( 3) 00:09:54.229 15426.166 - 15526.991: 99.1040% ( 3) 00:09:54.229 15526.991 - 15627.815: 99.1204% ( 3) 00:09:54.229 15627.815 - 15728.640: 99.1313% ( 2) 00:09:54.229 15728.640 - 15829.465: 99.1423% ( 2) 00:09:54.229 15829.465 - 15930.289: 99.1587% ( 3) 00:09:54.229 15930.289 - 16031.114: 99.1750% ( 3) 00:09:54.229 16031.114 - 16131.938: 99.1914% ( 3) 00:09:54.229 16131.938 - 16232.763: 99.2024% ( 2) 00:09:54.229 16232.763 - 16333.588: 99.2188% ( 3) 00:09:54.229 16333.588 - 16434.412: 99.2351% ( 3) 00:09:54.229 16434.412 - 16535.237: 99.2515% ( 3) 00:09:54.229 16535.237 - 16636.062: 99.2679% ( 3) 00:09:54.229 16636.062 - 16736.886: 99.2788% ( 2) 00:09:54.229 16736.886 - 16837.711: 99.2952% ( 3) 00:09:54.229 16837.711 - 16938.535: 99.3007% ( 1) 00:09:54.229 19257.502 - 19358.326: 99.3062% ( 1) 00:09:54.229 19358.326 - 19459.151: 99.3280% ( 4) 00:09:54.229 19459.151 - 19559.975: 99.3444% ( 3) 00:09:54.229 19559.975 - 19660.800: 99.3717% ( 5) 00:09:54.229 19660.800 - 19761.625: 99.3881% ( 3) 00:09:54.229 19761.625 - 19862.449: 99.4100% ( 4) 00:09:54.229 19862.449 - 19963.274: 99.4318% ( 4) 00:09:54.229 19963.274 - 20064.098: 99.4427% ( 2) 00:09:54.229 20064.098 - 20164.923: 99.4646% ( 4) 00:09:54.229 20164.923 - 20265.748: 99.4810% ( 3) 00:09:54.229 20265.748 - 20366.572: 99.5028% ( 4) 00:09:54.229 20366.572 - 20467.397: 99.5247% ( 4) 00:09:54.229 20467.397 - 20568.222: 99.5411% ( 3) 00:09:54.229 20568.222 - 20669.046: 99.5629% ( 4) 00:09:54.229 20669.046 - 20769.871: 99.5848% ( 4) 00:09:54.229 20769.871 - 20870.695: 99.6012% ( 3) 00:09:54.229 20870.695 - 20971.520: 99.6176% ( 3) 00:09:54.229 20971.520 - 21072.345: 99.6394% ( 4) 00:09:54.229 21072.345 - 21173.169: 99.6558% ( 3) 00:09:54.229 21173.169 - 21273.994: 99.6777% ( 4) 00:09:54.229 21273.994 - 21374.818: 99.6941% ( 3) 00:09:54.229 21374.818 - 21475.643: 99.7159% ( 4) 00:09:54.229 21475.643 - 21576.468: 99.7323% ( 3) 00:09:54.229 21576.468 - 21677.292: 99.7487% ( 3) 00:09:54.229 21677.292 - 21778.117: 99.7705% ( 4) 00:09:54.229 21778.117 - 21878.942: 99.7924% ( 4) 00:09:54.229 21878.942 - 21979.766: 99.8088% ( 3) 00:09:54.229 21979.766 - 22080.591: 99.8306% ( 4) 00:09:54.229 22080.591 - 22181.415: 99.8525% ( 4) 00:09:54.229 22181.415 - 22282.240: 99.8689% ( 3) 00:09:54.229 22282.240 - 22383.065: 99.8907% ( 4) 00:09:54.229 22383.065 - 22483.889: 99.9126% ( 4) 00:09:54.229 22483.889 - 22584.714: 99.9290% ( 3) 00:09:54.229 22584.714 - 22685.538: 99.9508% ( 4) 00:09:54.229 22685.538 - 22786.363: 99.9672% ( 3) 00:09:54.229 22786.363 - 22887.188: 99.9891% ( 4) 00:09:54.229 22887.188 - 22988.012: 100.0000% ( 2) 00:09:54.229 00:09:54.229 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:54.229 ============================================================================== 00:09:54.230 Range in us Cumulative IO count 00:09:54.230 5192.468 - 5217.674: 0.0164% ( 3) 00:09:54.230 5217.674 - 5242.880: 0.0710% ( 10) 00:09:54.230 5242.880 - 5268.086: 0.1912% ( 22) 00:09:54.230 5268.086 - 5293.292: 0.4207% ( 42) 00:09:54.230 5293.292 - 5318.498: 0.6392% ( 40) 00:09:54.230 5318.498 - 5343.705: 0.8851% ( 45) 00:09:54.230 5343.705 - 5368.911: 1.2128% ( 60) 00:09:54.230 5368.911 - 5394.117: 1.6444% ( 79) 00:09:54.230 5394.117 - 5419.323: 2.3000% ( 120) 00:09:54.230 5419.323 - 5444.529: 2.8846% ( 107) 00:09:54.230 5444.529 - 5469.735: 3.5129% ( 115) 00:09:54.230 5469.735 - 5494.942: 4.1357% ( 114) 00:09:54.230 5494.942 - 5520.148: 4.8295% ( 127) 00:09:54.230 5520.148 - 5545.354: 5.6873% ( 157) 
00:09:54.230 5545.354 - 5570.560: 6.5778% ( 163) 00:09:54.230 5570.560 - 5595.766: 7.6267% ( 192) 00:09:54.230 5595.766 - 5620.972: 8.6265% ( 183) 00:09:54.230 5620.972 - 5646.178: 9.6536% ( 188) 00:09:54.230 5646.178 - 5671.385: 10.6753% ( 187) 00:09:54.230 5671.385 - 5696.591: 11.7133% ( 190) 00:09:54.230 5696.591 - 5721.797: 12.8114% ( 201) 00:09:54.230 5721.797 - 5747.003: 13.8604% ( 192) 00:09:54.230 5747.003 - 5772.209: 14.9202% ( 194) 00:09:54.230 5772.209 - 5797.415: 15.9965% ( 197) 00:09:54.230 5797.415 - 5822.622: 17.0181% ( 187) 00:09:54.230 5822.622 - 5847.828: 18.0125% ( 182) 00:09:54.230 5847.828 - 5873.034: 19.0778% ( 195) 00:09:54.230 5873.034 - 5898.240: 20.2196% ( 209) 00:09:54.230 5898.240 - 5923.446: 21.2576% ( 190) 00:09:54.230 5923.446 - 5948.652: 22.3776% ( 205) 00:09:54.230 5948.652 - 5973.858: 23.5194% ( 209) 00:09:54.230 5973.858 - 5999.065: 24.6667% ( 210) 00:09:54.230 5999.065 - 6024.271: 25.7812% ( 204) 00:09:54.230 6024.271 - 6049.477: 26.9012% ( 205) 00:09:54.230 6049.477 - 6074.683: 28.0321% ( 207) 00:09:54.230 6074.683 - 6099.889: 29.1466% ( 204) 00:09:54.230 6099.889 - 6125.095: 30.2994% ( 211) 00:09:54.230 6125.095 - 6150.302: 31.4467% ( 210) 00:09:54.230 6150.302 - 6175.508: 32.6158% ( 214) 00:09:54.230 6175.508 - 6200.714: 33.7631% ( 210) 00:09:54.230 6200.714 - 6225.920: 34.9541% ( 218) 00:09:54.230 6225.920 - 6251.126: 36.1123% ( 212) 00:09:54.230 6251.126 - 6276.332: 37.2924% ( 216) 00:09:54.230 6276.332 - 6301.538: 38.5052% ( 222) 00:09:54.230 6301.538 - 6326.745: 39.7017% ( 219) 00:09:54.230 6326.745 - 6351.951: 40.8872% ( 217) 00:09:54.230 6351.951 - 6377.157: 42.0892% ( 220) 00:09:54.230 6377.157 - 6402.363: 43.2965% ( 221) 00:09:54.230 6402.363 - 6427.569: 44.4712% ( 215) 00:09:54.230 6427.569 - 6452.775: 45.7113% ( 227) 00:09:54.230 6452.775 - 6503.188: 48.1479% ( 446) 00:09:54.230 6503.188 - 6553.600: 50.6010% ( 449) 00:09:54.230 6553.600 - 6604.012: 53.0157% ( 442) 00:09:54.230 6604.012 - 6654.425: 55.4578% ( 447) 00:09:54.230 6654.425 - 6704.837: 57.9436% ( 455) 00:09:54.230 6704.837 - 6755.249: 60.4458% ( 458) 00:09:54.230 6755.249 - 6805.662: 62.9644% ( 461) 00:09:54.230 6805.662 - 6856.074: 65.4447% ( 454) 00:09:54.230 6856.074 - 6906.486: 67.8595% ( 442) 00:09:54.230 6906.486 - 6956.898: 70.3070% ( 448) 00:09:54.230 6956.898 - 7007.311: 72.5087% ( 403) 00:09:54.230 7007.311 - 7057.723: 74.5411% ( 372) 00:09:54.230 7057.723 - 7108.135: 76.2238% ( 308) 00:09:54.230 7108.135 - 7158.548: 77.6224% ( 256) 00:09:54.230 7158.548 - 7208.960: 78.7096% ( 199) 00:09:54.230 7208.960 - 7259.372: 79.5564% ( 155) 00:09:54.230 7259.372 - 7309.785: 80.2065% ( 119) 00:09:54.230 7309.785 - 7360.197: 80.7747% ( 104) 00:09:54.230 7360.197 - 7410.609: 81.2773% ( 92) 00:09:54.230 7410.609 - 7461.022: 81.7854% ( 93) 00:09:54.230 7461.022 - 7511.434: 82.2990% ( 94) 00:09:54.230 7511.434 - 7561.846: 82.7579% ( 84) 00:09:54.230 7561.846 - 7612.258: 83.1895% ( 79) 00:09:54.230 7612.258 - 7662.671: 83.6538% ( 85) 00:09:54.230 7662.671 - 7713.083: 84.0800% ( 78) 00:09:54.230 7713.083 - 7763.495: 84.4788% ( 73) 00:09:54.230 7763.495 - 7813.908: 84.8394% ( 66) 00:09:54.230 7813.908 - 7864.320: 85.1890% ( 64) 00:09:54.230 7864.320 - 7914.732: 85.5278% ( 62) 00:09:54.230 7914.732 - 7965.145: 85.8719% ( 63) 00:09:54.230 7965.145 - 8015.557: 86.1943% ( 59) 00:09:54.230 8015.557 - 8065.969: 86.5111% ( 58) 00:09:54.230 8065.969 - 8116.382: 86.8062% ( 54) 00:09:54.230 8116.382 - 8166.794: 87.1012% ( 54) 00:09:54.230 8166.794 - 8217.206: 87.3634% ( 48) 00:09:54.230 8217.206 - 
8267.618: 87.6420% ( 51) 00:09:54.230 8267.618 - 8318.031: 87.9261% ( 52) 00:09:54.230 8318.031 - 8368.443: 88.1720% ( 45) 00:09:54.230 8368.443 - 8418.855: 88.4288% ( 47) 00:09:54.230 8418.855 - 8469.268: 88.6691% ( 44) 00:09:54.230 8469.268 - 8519.680: 88.8986% ( 42) 00:09:54.230 8519.680 - 8570.092: 89.0898% ( 35) 00:09:54.230 8570.092 - 8620.505: 89.2865% ( 36) 00:09:54.230 8620.505 - 8670.917: 89.5160% ( 42) 00:09:54.230 8670.917 - 8721.329: 89.7290% ( 39) 00:09:54.230 8721.329 - 8771.742: 89.9530% ( 41) 00:09:54.230 8771.742 - 8822.154: 90.1552% ( 37) 00:09:54.230 8822.154 - 8872.566: 90.3628% ( 38) 00:09:54.230 8872.566 - 8922.978: 90.5485% ( 34) 00:09:54.230 8922.978 - 8973.391: 90.7561% ( 38) 00:09:54.230 8973.391 - 9023.803: 90.9473% ( 35) 00:09:54.230 9023.803 - 9074.215: 91.1440% ( 36) 00:09:54.230 9074.215 - 9124.628: 91.3407% ( 36) 00:09:54.230 9124.628 - 9175.040: 91.5538% ( 39) 00:09:54.230 9175.040 - 9225.452: 91.7504% ( 36) 00:09:54.230 9225.452 - 9275.865: 91.9417% ( 35) 00:09:54.230 9275.865 - 9326.277: 92.1056% ( 30) 00:09:54.230 9326.277 - 9376.689: 92.2694% ( 30) 00:09:54.230 9376.689 - 9427.102: 92.4060% ( 25) 00:09:54.230 9427.102 - 9477.514: 92.5535% ( 27) 00:09:54.230 9477.514 - 9527.926: 92.6901% ( 25) 00:09:54.230 9527.926 - 9578.338: 92.8212% ( 24) 00:09:54.230 9578.338 - 9628.751: 92.9578% ( 25) 00:09:54.230 9628.751 - 9679.163: 93.0999% ( 26) 00:09:54.230 9679.163 - 9729.575: 93.2255% ( 23) 00:09:54.230 9729.575 - 9779.988: 93.3621% ( 25) 00:09:54.230 9779.988 - 9830.400: 93.5096% ( 27) 00:09:54.230 9830.400 - 9880.812: 93.6407% ( 24) 00:09:54.230 9880.812 - 9931.225: 93.7500% ( 20) 00:09:54.230 9931.225 - 9981.637: 93.8429% ( 17) 00:09:54.230 9981.637 - 10032.049: 93.9248% ( 15) 00:09:54.230 10032.049 - 10082.462: 94.0232% ( 18) 00:09:54.230 10082.462 - 10132.874: 94.1215% ( 18) 00:09:54.230 10132.874 - 10183.286: 94.2089% ( 16) 00:09:54.230 10183.286 - 10233.698: 94.2854% ( 14) 00:09:54.230 10233.698 - 10284.111: 94.4165% ( 24) 00:09:54.230 10284.111 - 10334.523: 94.4766% ( 11) 00:09:54.230 10334.523 - 10384.935: 94.5422% ( 12) 00:09:54.230 10384.935 - 10435.348: 94.6077% ( 12) 00:09:54.230 10435.348 - 10485.760: 94.6951% ( 16) 00:09:54.230 10485.760 - 10536.172: 94.7716% ( 14) 00:09:54.230 10536.172 - 10586.585: 94.8481% ( 14) 00:09:54.230 10586.585 - 10636.997: 94.9301% ( 15) 00:09:54.230 10636.997 - 10687.409: 95.0120% ( 15) 00:09:54.230 10687.409 - 10737.822: 95.0776% ( 12) 00:09:54.230 10737.822 - 10788.234: 95.1650% ( 16) 00:09:54.230 10788.234 - 10838.646: 95.2524% ( 16) 00:09:54.230 10838.646 - 10889.058: 95.3234% ( 13) 00:09:54.230 10889.058 - 10939.471: 95.3944% ( 13) 00:09:54.230 10939.471 - 10989.883: 95.4928% ( 18) 00:09:54.230 10989.883 - 11040.295: 95.5857% ( 17) 00:09:54.230 11040.295 - 11090.708: 95.6622% ( 14) 00:09:54.230 11090.708 - 11141.120: 95.7933% ( 24) 00:09:54.230 11141.120 - 11191.532: 95.8971% ( 19) 00:09:54.230 11191.532 - 11241.945: 95.9736% ( 14) 00:09:54.230 11241.945 - 11292.357: 96.0828% ( 20) 00:09:54.230 11292.357 - 11342.769: 96.1866% ( 19) 00:09:54.230 11342.769 - 11393.182: 96.2795% ( 17) 00:09:54.230 11393.182 - 11443.594: 96.3778% ( 18) 00:09:54.230 11443.594 - 11494.006: 96.4816% ( 19) 00:09:54.231 11494.006 - 11544.418: 96.5745% ( 17) 00:09:54.231 11544.418 - 11594.831: 96.6838% ( 20) 00:09:54.231 11594.831 - 11645.243: 96.7821% ( 18) 00:09:54.231 11645.243 - 11695.655: 96.8859% ( 19) 00:09:54.231 11695.655 - 11746.068: 96.9843% ( 18) 00:09:54.231 11746.068 - 11796.480: 97.0935% ( 20) 00:09:54.231 11796.480 - 
11846.892: 97.1919% ( 18) 00:09:54.231 11846.892 - 11897.305: 97.2902% ( 18) 00:09:54.231 11897.305 - 11947.717: 97.3776% ( 16) 00:09:54.231 11947.717 - 11998.129: 97.4541% ( 14) 00:09:54.231 11998.129 - 12048.542: 97.5361% ( 15) 00:09:54.231 12048.542 - 12098.954: 97.6180% ( 15) 00:09:54.231 12098.954 - 12149.366: 97.7054% ( 16) 00:09:54.231 12149.366 - 12199.778: 97.7819% ( 14) 00:09:54.231 12199.778 - 12250.191: 97.8693% ( 16) 00:09:54.231 12250.191 - 12300.603: 97.9731% ( 19) 00:09:54.231 12300.603 - 12351.015: 98.0332% ( 11) 00:09:54.231 12351.015 - 12401.428: 98.0988% ( 12) 00:09:54.231 12401.428 - 12451.840: 98.1589% ( 11) 00:09:54.231 12451.840 - 12502.252: 98.2244% ( 12) 00:09:54.231 12502.252 - 12552.665: 98.2900% ( 12) 00:09:54.231 12552.665 - 12603.077: 98.3501% ( 11) 00:09:54.231 12603.077 - 12653.489: 98.4102% ( 11) 00:09:54.231 12653.489 - 12703.902: 98.4484% ( 7) 00:09:54.231 12703.902 - 12754.314: 98.4757% ( 5) 00:09:54.231 12754.314 - 12804.726: 98.5031% ( 5) 00:09:54.231 12804.726 - 12855.138: 98.5249% ( 4) 00:09:54.231 12855.138 - 12905.551: 98.5577% ( 6) 00:09:54.231 12905.551 - 13006.375: 98.6123% ( 10) 00:09:54.231 13006.375 - 13107.200: 98.6670% ( 10) 00:09:54.231 13107.200 - 13208.025: 98.7161% ( 9) 00:09:54.231 13208.025 - 13308.849: 98.7653% ( 9) 00:09:54.231 13308.849 - 13409.674: 98.7817% ( 3) 00:09:54.231 13409.674 - 13510.498: 98.7981% ( 3) 00:09:54.231 13510.498 - 13611.323: 98.8145% ( 3) 00:09:54.231 13611.323 - 13712.148: 98.8254% ( 2) 00:09:54.231 13712.148 - 13812.972: 98.8418% ( 3) 00:09:54.231 13812.972 - 13913.797: 98.8582% ( 3) 00:09:54.231 13913.797 - 14014.622: 98.8746% ( 3) 00:09:54.231 14014.622 - 14115.446: 98.8910% ( 3) 00:09:54.231 14115.446 - 14216.271: 98.9073% ( 3) 00:09:54.231 14216.271 - 14317.095: 98.9237% ( 3) 00:09:54.231 14317.095 - 14417.920: 98.9401% ( 3) 00:09:54.231 14417.920 - 14518.745: 98.9510% ( 2) 00:09:54.231 14518.745 - 14619.569: 98.9674% ( 3) 00:09:54.231 14619.569 - 14720.394: 98.9784% ( 2) 00:09:54.231 14720.394 - 14821.218: 98.9948% ( 3) 00:09:54.231 14821.218 - 14922.043: 99.0111% ( 3) 00:09:54.231 14922.043 - 15022.868: 99.0221% ( 2) 00:09:54.231 15022.868 - 15123.692: 99.0385% ( 3) 00:09:54.231 15123.692 - 15224.517: 99.0549% ( 3) 00:09:54.231 15224.517 - 15325.342: 99.0712% ( 3) 00:09:54.231 15325.342 - 15426.166: 99.0876% ( 3) 00:09:54.231 15426.166 - 15526.991: 99.1095% ( 4) 00:09:54.231 15526.991 - 15627.815: 99.1313% ( 4) 00:09:54.231 15627.815 - 15728.640: 99.1532% ( 4) 00:09:54.231 15728.640 - 15829.465: 99.1805% ( 5) 00:09:54.231 15829.465 - 15930.289: 99.2024% ( 4) 00:09:54.231 15930.289 - 16031.114: 99.2188% ( 3) 00:09:54.231 16031.114 - 16131.938: 99.2461% ( 5) 00:09:54.231 16131.938 - 16232.763: 99.2625% ( 3) 00:09:54.231 16232.763 - 16333.588: 99.2843% ( 4) 00:09:54.231 16333.588 - 16434.412: 99.3007% ( 3) 00:09:54.231 18652.554 - 18753.378: 99.3663% ( 12) 00:09:54.231 18753.378 - 18854.203: 99.3936% ( 5) 00:09:54.231 18854.203 - 18955.028: 99.4045% ( 2) 00:09:54.231 18955.028 - 19055.852: 99.4100% ( 1) 00:09:54.231 19055.852 - 19156.677: 99.4318% ( 4) 00:09:54.231 19156.677 - 19257.502: 99.4537% ( 4) 00:09:54.231 19257.502 - 19358.326: 99.4701% ( 3) 00:09:54.231 19358.326 - 19459.151: 99.4919% ( 4) 00:09:54.231 19459.151 - 19559.975: 99.5083% ( 3) 00:09:54.231 19559.975 - 19660.800: 99.5302% ( 4) 00:09:54.231 19660.800 - 19761.625: 99.5520% ( 4) 00:09:54.231 19761.625 - 19862.449: 99.5684% ( 3) 00:09:54.231 19862.449 - 19963.274: 99.5903% ( 4) 00:09:54.231 19963.274 - 20064.098: 99.6066% ( 3) 
00:09:54.231 20064.098 - 20164.923: 99.6285% ( 4) 00:09:54.231 20164.923 - 20265.748: 99.6503% ( 4) 00:09:54.231 20265.748 - 20366.572: 99.6667% ( 3) 00:09:54.231 20366.572 - 20467.397: 99.6886% ( 4) 00:09:54.231 20467.397 - 20568.222: 99.7050% ( 3) 00:09:54.231 20568.222 - 20669.046: 99.7268% ( 4) 00:09:54.231 20669.046 - 20769.871: 99.7432% ( 3) 00:09:54.231 20769.871 - 20870.695: 99.7651% ( 4) 00:09:54.231 20870.695 - 20971.520: 99.7815% ( 3) 00:09:54.231 20971.520 - 21072.345: 99.8033% ( 4) 00:09:54.231 21072.345 - 21173.169: 99.8197% ( 3) 00:09:54.231 21173.169 - 21273.994: 99.8416% ( 4) 00:09:54.231 21273.994 - 21374.818: 99.8634% ( 4) 00:09:54.231 21374.818 - 21475.643: 99.8798% ( 3) 00:09:54.231 21475.643 - 21576.468: 99.9017% ( 4) 00:09:54.231 21576.468 - 21677.292: 99.9235% ( 4) 00:09:54.231 21677.292 - 21778.117: 99.9399% ( 3) 00:09:54.231 21778.117 - 21878.942: 99.9563% ( 3) 00:09:54.231 21878.942 - 21979.766: 99.9781% ( 4) 00:09:54.231 21979.766 - 22080.591: 100.0000% ( 4) 00:09:54.231 00:09:54.231 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:54.231 ============================================================================== 00:09:54.231 Range in us Cumulative IO count 00:09:54.231 5192.468 - 5217.674: 0.0109% ( 2) 00:09:54.231 5217.674 - 5242.880: 0.0765% ( 12) 00:09:54.231 5242.880 - 5268.086: 0.2185% ( 26) 00:09:54.231 5268.086 - 5293.292: 0.3824% ( 30) 00:09:54.231 5293.292 - 5318.498: 0.6119% ( 42) 00:09:54.231 5318.498 - 5343.705: 0.7867% ( 32) 00:09:54.231 5343.705 - 5368.911: 1.0708% ( 52) 00:09:54.231 5368.911 - 5394.117: 1.5898% ( 95) 00:09:54.231 5394.117 - 5419.323: 2.2072% ( 113) 00:09:54.231 5419.323 - 5444.529: 2.8300% ( 114) 00:09:54.231 5444.529 - 5469.735: 3.5566% ( 133) 00:09:54.231 5469.735 - 5494.942: 4.3652% ( 148) 00:09:54.231 5494.942 - 5520.148: 5.0481% ( 125) 00:09:54.231 5520.148 - 5545.354: 5.7474% ( 128) 00:09:54.231 5545.354 - 5570.560: 6.5778% ( 152) 00:09:54.231 5570.560 - 5595.766: 7.5393% ( 176) 00:09:54.231 5595.766 - 5620.972: 8.6265% ( 199) 00:09:54.231 5620.972 - 5646.178: 9.6208% ( 182) 00:09:54.231 5646.178 - 5671.385: 10.6042% ( 180) 00:09:54.231 5671.385 - 5696.591: 11.6204% ( 186) 00:09:54.231 5696.591 - 5721.797: 12.7076% ( 199) 00:09:54.231 5721.797 - 5747.003: 13.7620% ( 193) 00:09:54.231 5747.003 - 5772.209: 14.8110% ( 192) 00:09:54.231 5772.209 - 5797.415: 15.8326% ( 187) 00:09:54.231 5797.415 - 5822.622: 16.9089% ( 197) 00:09:54.231 5822.622 - 5847.828: 17.9578% ( 192) 00:09:54.231 5847.828 - 5873.034: 19.0833% ( 206) 00:09:54.231 5873.034 - 5898.240: 20.2142% ( 207) 00:09:54.231 5898.240 - 5923.446: 21.2850% ( 196) 00:09:54.231 5923.446 - 5948.652: 22.3121% ( 188) 00:09:54.231 5948.652 - 5973.858: 23.4102% ( 201) 00:09:54.231 5973.858 - 5999.065: 24.5138% ( 202) 00:09:54.231 5999.065 - 6024.271: 25.6556% ( 209) 00:09:54.231 6024.271 - 6049.477: 26.7592% ( 202) 00:09:54.231 6049.477 - 6074.683: 27.8792% ( 205) 00:09:54.231 6074.683 - 6099.889: 29.0483% ( 214) 00:09:54.231 6099.889 - 6125.095: 30.1300% ( 198) 00:09:54.231 6125.095 - 6150.302: 31.2500% ( 205) 00:09:54.231 6150.302 - 6175.508: 32.3754% ( 206) 00:09:54.231 6175.508 - 6200.714: 33.5282% ( 211) 00:09:54.231 6200.714 - 6225.920: 34.6864% ( 212) 00:09:54.231 6225.920 - 6251.126: 35.8501% ( 213) 00:09:54.231 6251.126 - 6276.332: 37.0192% ( 214) 00:09:54.231 6276.332 - 6301.538: 38.2321% ( 222) 00:09:54.231 6301.538 - 6326.745: 39.4285% ( 219) 00:09:54.231 6326.745 - 6351.951: 40.6359% ( 221) 00:09:54.231 6351.951 - 6377.157: 41.8433% ( 221) 
00:09:54.231 6377.157 - 6402.363: 43.0070% ( 213) 00:09:54.231 6402.363 - 6427.569: 44.2581% ( 229) 00:09:54.231 6427.569 - 6452.775: 45.4491% ( 218) 00:09:54.232 6452.775 - 6503.188: 47.9185% ( 452) 00:09:54.232 6503.188 - 6553.600: 50.3169% ( 439) 00:09:54.232 6553.600 - 6604.012: 52.7262% ( 441) 00:09:54.232 6604.012 - 6654.425: 55.1847% ( 450) 00:09:54.232 6654.425 - 6704.837: 57.5830% ( 439) 00:09:54.232 6704.837 - 6755.249: 60.0361% ( 449) 00:09:54.232 6755.249 - 6805.662: 62.5164% ( 454) 00:09:54.232 6805.662 - 6856.074: 65.0022% ( 455) 00:09:54.232 6856.074 - 6906.486: 67.3787% ( 435) 00:09:54.232 6906.486 - 6956.898: 69.7552% ( 435) 00:09:54.232 6956.898 - 7007.311: 71.9460% ( 401) 00:09:54.232 7007.311 - 7057.723: 73.8691% ( 352) 00:09:54.232 7057.723 - 7108.135: 75.5409% ( 306) 00:09:54.232 7108.135 - 7158.548: 76.8630% ( 242) 00:09:54.232 7158.548 - 7208.960: 77.9556% ( 200) 00:09:54.232 7208.960 - 7259.372: 78.8899% ( 171) 00:09:54.232 7259.372 - 7309.785: 79.6056% ( 131) 00:09:54.232 7309.785 - 7360.197: 80.2775% ( 123) 00:09:54.232 7360.197 - 7410.609: 80.9003% ( 114) 00:09:54.232 7410.609 - 7461.022: 81.4412% ( 99) 00:09:54.232 7461.022 - 7511.434: 81.9602% ( 95) 00:09:54.232 7511.434 - 7561.846: 82.4902% ( 97) 00:09:54.232 7561.846 - 7612.258: 82.9655% ( 87) 00:09:54.232 7612.258 - 7662.671: 83.4626% ( 91) 00:09:54.232 7662.671 - 7713.083: 83.9052% ( 81) 00:09:54.232 7713.083 - 7763.495: 84.3258% ( 77) 00:09:54.232 7763.495 - 7813.908: 84.7356% ( 75) 00:09:54.232 7813.908 - 7864.320: 85.1016% ( 67) 00:09:54.232 7864.320 - 7914.732: 85.5114% ( 75) 00:09:54.232 7914.732 - 7965.145: 85.8392% ( 60) 00:09:54.232 7965.145 - 8015.557: 86.1506% ( 57) 00:09:54.232 8015.557 - 8065.969: 86.4237% ( 50) 00:09:54.232 8065.969 - 8116.382: 86.6969% ( 50) 00:09:54.232 8116.382 - 8166.794: 86.9755% ( 51) 00:09:54.232 8166.794 - 8217.206: 87.2542% ( 51) 00:09:54.232 8217.206 - 8267.618: 87.5328% ( 51) 00:09:54.232 8267.618 - 8318.031: 87.7841% ( 46) 00:09:54.232 8318.031 - 8368.443: 88.0573% ( 50) 00:09:54.232 8368.443 - 8418.855: 88.3031% ( 45) 00:09:54.232 8418.855 - 8469.268: 88.5653% ( 48) 00:09:54.232 8469.268 - 8519.680: 88.8276% ( 48) 00:09:54.232 8519.680 - 8570.092: 89.1062% ( 51) 00:09:54.232 8570.092 - 8620.505: 89.4012% ( 54) 00:09:54.232 8620.505 - 8670.917: 89.6853% ( 52) 00:09:54.232 8670.917 - 8721.329: 89.9913% ( 56) 00:09:54.232 8721.329 - 8771.742: 90.3081% ( 58) 00:09:54.232 8771.742 - 8822.154: 90.6141% ( 56) 00:09:54.232 8822.154 - 8872.566: 90.8982% ( 52) 00:09:54.232 8872.566 - 8922.978: 91.1823% ( 52) 00:09:54.232 8922.978 - 8973.391: 91.4445% ( 48) 00:09:54.232 8973.391 - 9023.803: 91.6794% ( 43) 00:09:54.232 9023.803 - 9074.215: 91.8925% ( 39) 00:09:54.232 9074.215 - 9124.628: 92.0618% ( 31) 00:09:54.232 9124.628 - 9175.040: 92.2367% ( 32) 00:09:54.232 9175.040 - 9225.452: 92.4170% ( 33) 00:09:54.232 9225.452 - 9275.865: 92.5863% ( 31) 00:09:54.232 9275.865 - 9326.277: 92.7611% ( 32) 00:09:54.232 9326.277 - 9376.689: 92.9360% ( 32) 00:09:54.232 9376.689 - 9427.102: 93.0999% ( 30) 00:09:54.232 9427.102 - 9477.514: 93.2692% ( 31) 00:09:54.232 9477.514 - 9527.926: 93.4386% ( 31) 00:09:54.232 9527.926 - 9578.338: 93.5697% ( 24) 00:09:54.232 9578.338 - 9628.751: 93.7008% ( 24) 00:09:54.232 9628.751 - 9679.163: 93.8265% ( 23) 00:09:54.232 9679.163 - 9729.575: 93.9412% ( 21) 00:09:54.232 9729.575 - 9779.988: 94.0559% ( 21) 00:09:54.232 9779.988 - 9830.400: 94.1597% ( 19) 00:09:54.232 9830.400 - 9880.812: 94.2581% ( 18) 00:09:54.232 9880.812 - 9931.225: 94.3728% ( 21) 
00:09:54.232 9931.225 - 9981.637: 94.4657% ( 17) 00:09:54.232 9981.637 - 10032.049: 94.5586% ( 17) 00:09:54.232 10032.049 - 10082.462: 94.6514% ( 17) 00:09:54.232 10082.462 - 10132.874: 94.7170% ( 12) 00:09:54.232 10132.874 - 10183.286: 94.7826% ( 12) 00:09:54.232 10183.286 - 10233.698: 94.8317% ( 9) 00:09:54.232 10233.698 - 10284.111: 94.8809% ( 9) 00:09:54.232 10284.111 - 10334.523: 94.9301% ( 9) 00:09:54.232 10334.523 - 10384.935: 94.9847% ( 10) 00:09:54.232 10384.935 - 10435.348: 95.0120% ( 5) 00:09:54.232 10435.348 - 10485.760: 95.0393% ( 5) 00:09:54.232 10485.760 - 10536.172: 95.0721% ( 6) 00:09:54.232 10536.172 - 10586.585: 95.0994% ( 5) 00:09:54.232 10586.585 - 10636.997: 95.1322% ( 6) 00:09:54.232 10636.997 - 10687.409: 95.1595% ( 5) 00:09:54.232 10687.409 - 10737.822: 95.1868% ( 5) 00:09:54.232 10737.822 - 10788.234: 95.2196% ( 6) 00:09:54.232 10788.234 - 10838.646: 95.2469% ( 5) 00:09:54.232 10838.646 - 10889.058: 95.2797% ( 6) 00:09:54.232 10889.058 - 10939.471: 95.3070% ( 5) 00:09:54.232 10939.471 - 10989.883: 95.3344% ( 5) 00:09:54.232 10989.883 - 11040.295: 95.3781% ( 8) 00:09:54.232 11040.295 - 11090.708: 95.4163% ( 7) 00:09:54.232 11090.708 - 11141.120: 95.5857% ( 31) 00:09:54.232 11141.120 - 11191.532: 95.6294% ( 8) 00:09:54.232 11191.532 - 11241.945: 95.6895% ( 11) 00:09:54.232 11241.945 - 11292.357: 95.7277% ( 7) 00:09:54.232 11292.357 - 11342.769: 95.7933% ( 12) 00:09:54.232 11342.769 - 11393.182: 95.8752% ( 15) 00:09:54.232 11393.182 - 11443.594: 95.9626% ( 16) 00:09:54.232 11443.594 - 11494.006: 96.0391% ( 14) 00:09:54.232 11494.006 - 11544.418: 96.1265% ( 16) 00:09:54.232 11544.418 - 11594.831: 96.2085% ( 15) 00:09:54.232 11594.831 - 11645.243: 96.2959% ( 16) 00:09:54.232 11645.243 - 11695.655: 96.3833% ( 16) 00:09:54.232 11695.655 - 11746.068: 96.4707% ( 16) 00:09:54.232 11746.068 - 11796.480: 96.5581% ( 16) 00:09:54.232 11796.480 - 11846.892: 96.6237% ( 12) 00:09:54.232 11846.892 - 11897.305: 96.7056% ( 15) 00:09:54.232 11897.305 - 11947.717: 96.7931% ( 16) 00:09:54.232 11947.717 - 11998.129: 96.8805% ( 16) 00:09:54.232 11998.129 - 12048.542: 96.9679% ( 16) 00:09:54.232 12048.542 - 12098.954: 97.0498% ( 15) 00:09:54.232 12098.954 - 12149.366: 97.1263% ( 14) 00:09:54.232 12149.366 - 12199.778: 97.2083% ( 15) 00:09:54.232 12199.778 - 12250.191: 97.2902% ( 15) 00:09:54.232 12250.191 - 12300.603: 97.3776% ( 16) 00:09:54.232 12300.603 - 12351.015: 97.4541% ( 14) 00:09:54.232 12351.015 - 12401.428: 97.5361% ( 15) 00:09:54.232 12401.428 - 12451.840: 97.6235% ( 16) 00:09:54.232 12451.840 - 12502.252: 97.7163% ( 17) 00:09:54.232 12502.252 - 12552.665: 97.7983% ( 15) 00:09:54.232 12552.665 - 12603.077: 97.8857% ( 16) 00:09:54.232 12603.077 - 12653.489: 97.9622% ( 14) 00:09:54.232 12653.489 - 12703.902: 98.0278% ( 12) 00:09:54.232 12703.902 - 12754.314: 98.0878% ( 11) 00:09:54.232 12754.314 - 12804.726: 98.1534% ( 12) 00:09:54.232 12804.726 - 12855.138: 98.2190% ( 12) 00:09:54.232 12855.138 - 12905.551: 98.2736% ( 10) 00:09:54.232 12905.551 - 13006.375: 98.3719% ( 18) 00:09:54.232 13006.375 - 13107.200: 98.4594% ( 16) 00:09:54.232 13107.200 - 13208.025: 98.5577% ( 18) 00:09:54.232 13208.025 - 13308.849: 98.6451% ( 16) 00:09:54.232 13308.849 - 13409.674: 98.7161% ( 13) 00:09:54.232 13409.674 - 13510.498: 98.8035% ( 16) 00:09:54.232 13510.498 - 13611.323: 98.8964% ( 17) 00:09:54.232 13611.323 - 13712.148: 98.9401% ( 8) 00:09:54.232 13712.148 - 13812.972: 98.9784% ( 7) 00:09:54.232 13812.972 - 13913.797: 99.0275% ( 9) 00:09:54.232 13913.797 - 14014.622: 99.0712% ( 8) 
00:09:54.232 14014.622 - 14115.446: 99.1149% ( 8) 00:09:54.232 14115.446 - 14216.271: 99.1477% ( 6) 00:09:54.232 14216.271 - 14317.095: 99.1750% ( 5) 00:09:54.232 14317.095 - 14417.920: 99.1969% ( 4) 00:09:54.232 14417.920 - 14518.745: 99.2188% ( 4) 00:09:54.232 14518.745 - 14619.569: 99.2351% ( 3) 00:09:54.232 14619.569 - 14720.394: 99.2625% ( 5) 00:09:54.232 14720.394 - 14821.218: 99.2843% ( 4) 00:09:54.232 14821.218 - 14922.043: 99.3007% ( 3) 00:09:54.232 17644.308 - 17745.132: 99.3444% ( 8) 00:09:54.232 17745.132 - 17845.957: 99.3608% ( 3) 00:09:54.233 17845.957 - 17946.782: 99.3717% ( 2) 00:09:54.233 17946.782 - 18047.606: 99.3936% ( 4) 00:09:54.233 18047.606 - 18148.431: 99.4154% ( 4) 00:09:54.233 18148.431 - 18249.255: 99.4318% ( 3) 00:09:54.233 18249.255 - 18350.080: 99.4537% ( 4) 00:09:54.233 18350.080 - 18450.905: 99.4701% ( 3) 00:09:54.233 18450.905 - 18551.729: 99.4919% ( 4) 00:09:54.233 18551.729 - 18652.554: 99.5083% ( 3) 00:09:54.233 18652.554 - 18753.378: 99.5302% ( 4) 00:09:54.233 18753.378 - 18854.203: 99.5465% ( 3) 00:09:54.233 18854.203 - 18955.028: 99.5684% ( 4) 00:09:54.233 18955.028 - 19055.852: 99.5903% ( 4) 00:09:54.233 19055.852 - 19156.677: 99.6066% ( 3) 00:09:54.233 19156.677 - 19257.502: 99.6230% ( 3) 00:09:54.233 19257.502 - 19358.326: 99.6449% ( 4) 00:09:54.233 19358.326 - 19459.151: 99.6667% ( 4) 00:09:54.233 19459.151 - 19559.975: 99.6831% ( 3) 00:09:54.233 19559.975 - 19660.800: 99.6995% ( 3) 00:09:54.233 19660.800 - 19761.625: 99.7214% ( 4) 00:09:54.233 19761.625 - 19862.449: 99.7378% ( 3) 00:09:54.233 19862.449 - 19963.274: 99.7596% ( 4) 00:09:54.233 19963.274 - 20064.098: 99.7760% ( 3) 00:09:54.233 20064.098 - 20164.923: 99.7979% ( 4) 00:09:54.233 20164.923 - 20265.748: 99.8197% ( 4) 00:09:54.233 20265.748 - 20366.572: 99.8416% ( 4) 00:09:54.233 20366.572 - 20467.397: 99.8580% ( 3) 00:09:54.233 20467.397 - 20568.222: 99.8798% ( 4) 00:09:54.233 20568.222 - 20669.046: 99.8962% ( 3) 00:09:54.233 20669.046 - 20769.871: 99.9181% ( 4) 00:09:54.233 20769.871 - 20870.695: 99.9344% ( 3) 00:09:54.233 20870.695 - 20971.520: 99.9454% ( 2) 00:09:54.233 20971.520 - 21072.345: 99.9672% ( 4) 00:09:54.233 21072.345 - 21173.169: 99.9891% ( 4) 00:09:54.233 21173.169 - 21273.994: 100.0000% ( 2) 00:09:54.233 00:09:54.233 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:54.233 ============================================================================== 00:09:54.233 Range in us Cumulative IO count 00:09:54.233 5217.674 - 5242.880: 0.0219% ( 4) 00:09:54.233 5242.880 - 5268.086: 0.1748% ( 28) 00:09:54.233 5268.086 - 5293.292: 0.3606% ( 34) 00:09:54.233 5293.292 - 5318.498: 0.6064% ( 45) 00:09:54.233 5318.498 - 5343.705: 0.9397% ( 61) 00:09:54.233 5343.705 - 5368.911: 1.3276% ( 71) 00:09:54.233 5368.911 - 5394.117: 1.8848% ( 102) 00:09:54.233 5394.117 - 5419.323: 2.3929% ( 93) 00:09:54.233 5419.323 - 5444.529: 2.9884% ( 109) 00:09:54.233 5444.529 - 5469.735: 3.7314% ( 136) 00:09:54.233 5469.735 - 5494.942: 4.3925% ( 121) 00:09:54.233 5494.942 - 5520.148: 5.0481% ( 120) 00:09:54.233 5520.148 - 5545.354: 5.8566% ( 148) 00:09:54.233 5545.354 - 5570.560: 6.8127% ( 175) 00:09:54.233 5570.560 - 5595.766: 7.8125% ( 183) 00:09:54.233 5595.766 - 5620.972: 8.8724% ( 194) 00:09:54.233 5620.972 - 5646.178: 9.8558% ( 180) 00:09:54.233 5646.178 - 5671.385: 10.8883% ( 189) 00:09:54.233 5671.385 - 5696.591: 11.8881% ( 183) 00:09:54.233 5696.591 - 5721.797: 12.9043% ( 186) 00:09:54.233 5721.797 - 5747.003: 13.9368% ( 189) 00:09:54.233 5747.003 - 5772.209: 15.0240% ( 199) 
00:09:54.233 5772.209 - 5797.415: 16.0894% ( 195) 00:09:54.233 5797.415 - 5822.622: 17.1984% ( 203) 00:09:54.233 5822.622 - 5847.828: 18.2474% ( 192) 00:09:54.233 5847.828 - 5873.034: 19.3400% ( 200) 00:09:54.233 5873.034 - 5898.240: 20.4163% ( 197) 00:09:54.233 5898.240 - 5923.446: 21.5090% ( 200) 00:09:54.233 5923.446 - 5948.652: 22.6726% ( 213) 00:09:54.233 5948.652 - 5973.858: 23.8035% ( 207) 00:09:54.233 5973.858 - 5999.065: 24.8853% ( 198) 00:09:54.233 5999.065 - 6024.271: 25.9779% ( 200) 00:09:54.233 6024.271 - 6049.477: 27.0924% ( 204) 00:09:54.233 6049.477 - 6074.683: 28.2288% ( 208) 00:09:54.233 6074.683 - 6099.889: 29.3215% ( 200) 00:09:54.233 6099.889 - 6125.095: 30.4414% ( 205) 00:09:54.233 6125.095 - 6150.302: 31.5396% ( 201) 00:09:54.233 6150.302 - 6175.508: 32.6541% ( 204) 00:09:54.233 6175.508 - 6200.714: 33.7686% ( 204) 00:09:54.233 6200.714 - 6225.920: 34.9268% ( 212) 00:09:54.233 6225.920 - 6251.126: 36.1123% ( 217) 00:09:54.233 6251.126 - 6276.332: 37.2542% ( 209) 00:09:54.233 6276.332 - 6301.538: 38.3905% ( 208) 00:09:54.233 6301.538 - 6326.745: 39.5815% ( 218) 00:09:54.233 6326.745 - 6351.951: 40.7233% ( 209) 00:09:54.233 6351.951 - 6377.157: 41.9089% ( 217) 00:09:54.233 6377.157 - 6402.363: 43.1217% ( 222) 00:09:54.233 6402.363 - 6427.569: 44.3619% ( 227) 00:09:54.233 6427.569 - 6452.775: 45.5365% ( 215) 00:09:54.233 6452.775 - 6503.188: 47.9895% ( 449) 00:09:54.233 6503.188 - 6553.600: 50.3988% ( 441) 00:09:54.233 6553.600 - 6604.012: 52.8628% ( 451) 00:09:54.233 6604.012 - 6654.425: 55.3212% ( 450) 00:09:54.233 6654.425 - 6704.837: 57.7196% ( 439) 00:09:54.233 6704.837 - 6755.249: 60.1726% ( 449) 00:09:54.233 6755.249 - 6805.662: 62.6858% ( 460) 00:09:54.233 6805.662 - 6856.074: 65.1224% ( 446) 00:09:54.233 6856.074 - 6906.486: 67.6027% ( 454) 00:09:54.233 6906.486 - 6956.898: 69.8645% ( 414) 00:09:54.233 6956.898 - 7007.311: 71.9952% ( 390) 00:09:54.233 7007.311 - 7057.723: 73.8582% ( 341) 00:09:54.233 7057.723 - 7108.135: 75.5354% ( 307) 00:09:54.233 7108.135 - 7158.548: 76.8575% ( 242) 00:09:54.233 7158.548 - 7208.960: 77.9338% ( 197) 00:09:54.233 7208.960 - 7259.372: 78.7533% ( 150) 00:09:54.233 7259.372 - 7309.785: 79.3378% ( 107) 00:09:54.233 7309.785 - 7360.197: 79.9115% ( 105) 00:09:54.233 7360.197 - 7410.609: 80.4633% ( 101) 00:09:54.233 7410.609 - 7461.022: 80.9714% ( 93) 00:09:54.233 7461.022 - 7511.434: 81.4958% ( 96) 00:09:54.233 7511.434 - 7561.846: 82.0258% ( 97) 00:09:54.233 7561.846 - 7612.258: 82.5120% ( 89) 00:09:54.233 7612.258 - 7662.671: 82.9983% ( 89) 00:09:54.233 7662.671 - 7713.083: 83.4681% ( 86) 00:09:54.233 7713.083 - 7763.495: 83.8942% ( 78) 00:09:54.233 7763.495 - 7813.908: 84.3204% ( 78) 00:09:54.233 7813.908 - 7864.320: 84.7629% ( 81) 00:09:54.233 7864.320 - 7914.732: 85.1672% ( 74) 00:09:54.233 7914.732 - 7965.145: 85.5824% ( 76) 00:09:54.233 7965.145 - 8015.557: 85.9703% ( 71) 00:09:54.233 8015.557 - 8065.969: 86.3090% ( 62) 00:09:54.233 8065.969 - 8116.382: 86.6641% ( 65) 00:09:54.233 8116.382 - 8166.794: 86.9701% ( 56) 00:09:54.233 8166.794 - 8217.206: 87.2815% ( 57) 00:09:54.233 8217.206 - 8267.618: 87.5983% ( 58) 00:09:54.233 8267.618 - 8318.031: 87.8715% ( 50) 00:09:54.233 8318.031 - 8368.443: 88.1337% ( 48) 00:09:54.233 8368.443 - 8418.855: 88.3741% ( 44) 00:09:54.233 8418.855 - 8469.268: 88.6309% ( 47) 00:09:54.233 8469.268 - 8519.680: 88.8549% ( 41) 00:09:54.233 8519.680 - 8570.092: 89.0734% ( 40) 00:09:54.233 8570.092 - 8620.505: 89.2810% ( 38) 00:09:54.233 8620.505 - 8670.917: 89.4941% ( 39) 00:09:54.233 8670.917 - 
8721.329: 89.7181% ( 41) 00:09:54.233 8721.329 - 8771.742: 89.9257% ( 38) 00:09:54.233 8771.742 - 8822.154: 90.1442% ( 40) 00:09:54.233 8822.154 - 8872.566: 90.3354% ( 35) 00:09:54.233 8872.566 - 8922.978: 90.5267% ( 35) 00:09:54.233 8922.978 - 8973.391: 90.7069% ( 33) 00:09:54.233 8973.391 - 9023.803: 90.8872% ( 33) 00:09:54.233 9023.803 - 9074.215: 91.0730% ( 34) 00:09:54.233 9074.215 - 9124.628: 91.2915% ( 40) 00:09:54.233 9124.628 - 9175.040: 91.4827% ( 35) 00:09:54.233 9175.040 - 9225.452: 91.6630% ( 33) 00:09:54.233 9225.452 - 9275.865: 91.7832% ( 22) 00:09:54.233 9275.865 - 9326.277: 91.9089% ( 23) 00:09:54.233 9326.277 - 9376.689: 92.0400% ( 24) 00:09:54.233 9376.689 - 9427.102: 92.2094% ( 31) 00:09:54.233 9427.102 - 9477.514: 92.3896% ( 33) 00:09:54.233 9477.514 - 9527.926: 92.5317% ( 26) 00:09:54.233 9527.926 - 9578.338: 92.7120% ( 33) 00:09:54.234 9578.338 - 9628.751: 92.8704% ( 29) 00:09:54.234 9628.751 - 9679.163: 93.0070% ( 25) 00:09:54.234 9679.163 - 9729.575: 93.1654% ( 29) 00:09:54.234 9729.575 - 9779.988: 93.3403% ( 32) 00:09:54.234 9779.988 - 9830.400: 93.4714% ( 24) 00:09:54.234 9830.400 - 9880.812: 93.6134% ( 26) 00:09:54.234 9880.812 - 9931.225: 93.7555% ( 26) 00:09:54.234 9931.225 - 9981.637: 93.9030% ( 27) 00:09:54.234 9981.637 - 10032.049: 94.0396% ( 25) 00:09:54.234 10032.049 - 10082.462: 94.1761% ( 25) 00:09:54.234 10082.462 - 10132.874: 94.3182% ( 26) 00:09:54.234 10132.874 - 10183.286: 94.4384% ( 22) 00:09:54.234 10183.286 - 10233.698: 94.5640% ( 23) 00:09:54.234 10233.698 - 10284.111: 94.6733% ( 20) 00:09:54.234 10284.111 - 10334.523: 94.7716% ( 18) 00:09:54.234 10334.523 - 10384.935: 94.8754% ( 19) 00:09:54.234 10384.935 - 10435.348: 94.9847% ( 20) 00:09:54.234 10435.348 - 10485.760: 95.0885% ( 19) 00:09:54.234 10485.760 - 10536.172: 95.1923% ( 19) 00:09:54.234 10536.172 - 10586.585: 95.3016% ( 20) 00:09:54.234 10586.585 - 10636.997: 95.4108% ( 20) 00:09:54.234 10636.997 - 10687.409: 95.5201% ( 20) 00:09:54.234 10687.409 - 10737.822: 95.6622% ( 26) 00:09:54.234 10737.822 - 10788.234: 95.7660% ( 19) 00:09:54.234 10788.234 - 10838.646: 95.9025% ( 25) 00:09:54.234 10838.646 - 10889.058: 96.0063% ( 19) 00:09:54.234 10889.058 - 10939.471: 96.1211% ( 21) 00:09:54.234 10939.471 - 10989.883: 96.2249% ( 19) 00:09:54.234 10989.883 - 11040.295: 96.3232% ( 18) 00:09:54.234 11040.295 - 11090.708: 96.3888% ( 12) 00:09:54.234 11090.708 - 11141.120: 96.4161% ( 5) 00:09:54.234 11141.120 - 11191.532: 96.4598% ( 8) 00:09:54.234 11191.532 - 11241.945: 96.4980% ( 7) 00:09:54.234 11241.945 - 11292.357: 96.5472% ( 9) 00:09:54.234 11292.357 - 11342.769: 96.5909% ( 8) 00:09:54.234 11342.769 - 11393.182: 96.6346% ( 8) 00:09:54.234 11393.182 - 11443.594: 96.6838% ( 9) 00:09:54.234 11443.594 - 11494.006: 96.7493% ( 12) 00:09:54.234 11494.006 - 11544.418: 96.8204% ( 13) 00:09:54.234 11544.418 - 11594.831: 96.8805% ( 11) 00:09:54.234 11594.831 - 11645.243: 96.9406% ( 11) 00:09:54.234 11645.243 - 11695.655: 96.9952% ( 10) 00:09:54.234 11695.655 - 11746.068: 97.0608% ( 12) 00:09:54.234 11746.068 - 11796.480: 97.1154% ( 10) 00:09:54.234 11796.480 - 11846.892: 97.1809% ( 12) 00:09:54.234 11846.892 - 11897.305: 97.2410% ( 11) 00:09:54.234 11897.305 - 11947.717: 97.3066% ( 12) 00:09:54.234 11947.717 - 11998.129: 97.3667% ( 11) 00:09:54.234 11998.129 - 12048.542: 97.4377% ( 13) 00:09:54.234 12048.542 - 12098.954: 97.4924% ( 10) 00:09:54.234 12098.954 - 12149.366: 97.5524% ( 11) 00:09:54.234 12149.366 - 12199.778: 97.6180% ( 12) 00:09:54.234 12199.778 - 12250.191: 97.6781% ( 11) 00:09:54.234 
12250.191 - 12300.603: 97.7382% ( 11) 00:09:54.234 12300.603 - 12351.015: 97.8038% ( 12) 00:09:54.234 12351.015 - 12401.428: 97.8639% ( 11) 00:09:54.234 12401.428 - 12451.840: 97.9240% ( 11) 00:09:54.234 12451.840 - 12502.252: 97.9895% ( 12) 00:09:54.234 12502.252 - 12552.665: 98.0551% ( 12) 00:09:54.234 12552.665 - 12603.077: 98.1261% ( 13) 00:09:54.234 12603.077 - 12653.489: 98.1753% ( 9) 00:09:54.234 12653.489 - 12703.902: 98.2244% ( 9) 00:09:54.234 12703.902 - 12754.314: 98.2736% ( 9) 00:09:54.234 12754.314 - 12804.726: 98.3118% ( 7) 00:09:54.234 12804.726 - 12855.138: 98.3501% ( 7) 00:09:54.234 12855.138 - 12905.551: 98.3883% ( 7) 00:09:54.234 12905.551 - 13006.375: 98.4594% ( 13) 00:09:54.234 13006.375 - 13107.200: 98.5358% ( 14) 00:09:54.234 13107.200 - 13208.025: 98.6123% ( 14) 00:09:54.234 13208.025 - 13308.849: 98.6943% ( 15) 00:09:54.234 13308.849 - 13409.674: 98.7489% ( 10) 00:09:54.234 13409.674 - 13510.498: 98.7981% ( 9) 00:09:54.234 13510.498 - 13611.323: 98.8418% ( 8) 00:09:54.234 13611.323 - 13712.148: 98.8636% ( 4) 00:09:54.234 13712.148 - 13812.972: 98.8855% ( 4) 00:09:54.234 13812.972 - 13913.797: 98.9073% ( 4) 00:09:54.234 13913.797 - 14014.622: 98.9347% ( 5) 00:09:54.234 14014.622 - 14115.446: 98.9565% ( 4) 00:09:54.234 14115.446 - 14216.271: 98.9784% ( 4) 00:09:54.234 14216.271 - 14317.095: 99.0002% ( 4) 00:09:54.234 14317.095 - 14417.920: 99.0221% ( 4) 00:09:54.234 14417.920 - 14518.745: 99.0439% ( 4) 00:09:54.234 14518.745 - 14619.569: 99.0658% ( 4) 00:09:54.234 14619.569 - 14720.394: 99.0876% ( 4) 00:09:54.234 14720.394 - 14821.218: 99.1149% ( 5) 00:09:54.234 14821.218 - 14922.043: 99.1368% ( 4) 00:09:54.234 14922.043 - 15022.868: 99.1587% ( 4) 00:09:54.234 15022.868 - 15123.692: 99.1805% ( 4) 00:09:54.234 15123.692 - 15224.517: 99.2024% ( 4) 00:09:54.234 15224.517 - 15325.342: 99.2297% ( 5) 00:09:54.234 15325.342 - 15426.166: 99.2515% ( 4) 00:09:54.234 15426.166 - 15526.991: 99.2734% ( 4) 00:09:54.234 15526.991 - 15627.815: 99.2952% ( 4) 00:09:54.234 15627.815 - 15728.640: 99.3007% ( 1) 00:09:54.234 16736.886 - 16837.711: 99.3226% ( 4) 00:09:54.234 16837.711 - 16938.535: 99.3389% ( 3) 00:09:54.234 16938.535 - 17039.360: 99.3553% ( 3) 00:09:54.234 17039.360 - 17140.185: 99.3826% ( 5) 00:09:54.234 17140.185 - 17241.009: 99.3990% ( 3) 00:09:54.234 17241.009 - 17341.834: 99.4209% ( 4) 00:09:54.234 17341.834 - 17442.658: 99.4427% ( 4) 00:09:54.234 17442.658 - 17543.483: 99.4591% ( 3) 00:09:54.234 17543.483 - 17644.308: 99.4810% ( 4) 00:09:54.234 17644.308 - 17745.132: 99.4974% ( 3) 00:09:54.234 17745.132 - 17845.957: 99.5192% ( 4) 00:09:54.234 17845.957 - 17946.782: 99.5356% ( 3) 00:09:54.234 17946.782 - 18047.606: 99.5575% ( 4) 00:09:54.234 18047.606 - 18148.431: 99.5793% ( 4) 00:09:54.234 18148.431 - 18249.255: 99.5957% ( 3) 00:09:54.234 18249.255 - 18350.080: 99.6176% ( 4) 00:09:54.234 18350.080 - 18450.905: 99.6394% ( 4) 00:09:54.234 18450.905 - 18551.729: 99.6558% ( 3) 00:09:54.234 18551.729 - 18652.554: 99.6722% ( 3) 00:09:54.234 18652.554 - 18753.378: 99.6941% ( 4) 00:09:54.234 18753.378 - 18854.203: 99.7159% ( 4) 00:09:54.234 18854.203 - 18955.028: 99.7323% ( 3) 00:09:54.234 18955.028 - 19055.852: 99.7542% ( 4) 00:09:54.234 19055.852 - 19156.677: 99.7705% ( 3) 00:09:54.234 19156.677 - 19257.502: 99.7924% ( 4) 00:09:54.234 19257.502 - 19358.326: 99.8142% ( 4) 00:09:54.234 19358.326 - 19459.151: 99.8361% ( 4) 00:09:54.234 19459.151 - 19559.975: 99.8525% ( 3) 00:09:54.234 19559.975 - 19660.800: 99.8743% ( 4) 00:09:54.234 19660.800 - 19761.625: 99.8907% ( 3) 
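The blocks above are the cumulative latency histograms that spdk_nvme_perf emits when latency tracking is enabled: each row is a latency bucket, the percentage is the running share of I/Os that completed at or below that bucket's upper bound, and the parenthesized figure is the raw I/O count in the bucket. Any percentile can therefore be read off as the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal illustrative sketch of that lookup in Python follows; the parser, helper names, and sample rows are hypothetical, shaped like the output above rather than taken from SPDK's own tooling:

import re

# Shape of one cumulative-histogram row as printed above, e.g.
#   "  5192.468 -  5217.674:   0.0164% (     3)"
ROW = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\s*\)")

def parse_rows(lines):
    # Yield (bucket_upper_us, cumulative_pct, io_count) per histogram row.
    for line in lines:
        m = ROW.search(line)
        if m:
            yield float(m.group(2)), float(m.group(3)), int(m.group(4))

def percentile_us(rows, target_pct):
    # Upper bound of the first bucket whose cumulative share reaches target.
    for upper_us, cum_pct, _count in rows:
        if cum_pct >= target_pct:
            return upper_us
    return None

# Hypothetical rows shaped like the log output, not actual results:
sample = [
    " 5192.468 -  5217.674:   0.0164% (     3)",
    " 6503.188 -  6553.600:  50.6010% (   449)",
    " 9074.215 -  9124.628:  90.1629% (    38)",
    "22080.591 - 22181.415: 100.0000% (     4)",
]
rows = list(parse_rows(sample))
print(percentile_us(rows, 50.0))    # 6553.6 -> median latency in us
print(percentile_us(rows, 99.0))    # 22181.415

Run over a complete histogram, this lookup reproduces, to bucket granularity, the kind of percentile figures that appear in the "Summary latency data" blocks further down.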
00:09:54.234 
00:09:54.234 07:25:03 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:55.614 Initializing NVMe Controllers
00:09:55.614 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:09:55.614 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:09:55.614 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:09:55.614 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:09:55.614 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:09:55.614 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:09:55.614 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:09:55.614 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:09:55.614 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:09:55.614 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:09:55.614 Initialization complete. Launching workers.
00:09:55.614 ========================================================
00:09:55.614                                                                             Latency(us)
00:09:55.614 Device Information                      :       IOPS      MiB/s    Average        min        max
00:09:55.614 PCIE (0000:00:06.0) NSID 1 from core 0:   18163.69     212.86    7044.03    4930.54   27516.93
00:09:55.614 PCIE (0000:00:07.0) NSID 1 from core 0:   18163.69     212.86    7038.80    5276.94   26821.03
00:09:55.614 PCIE (0000:00:09.0) NSID 1 from core 0:   18163.69     212.86    7033.38    5077.76   26318.39
00:09:55.614 PCIE (0000:00:08.0) NSID 1 from core 0:   18163.69     212.86    7027.83    5066.08   25241.79
00:09:55.614 PCIE (0000:00:08.0) NSID 2 from core 0:   18163.69     212.86    7022.24    5168.15   24041.70
00:09:55.614 PCIE (0000:00:08.0) NSID 3 from core 0:   18163.69     212.86    7016.70    5163.51   22873.14
00:09:55.614 ========================================================
00:09:55.614 Total                                  :  108982.11    1277.13    7030.50    4930.54   27516.93
00:09:55.614 
00:09:55.614 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:55.614 =================================================================================
00:09:55.614   1.00000% :  5368.911us
00:09:55.614  10.00000% :  5847.828us
00:09:55.614  25.00000% :  6099.889us
00:09:55.614  50.00000% :  6503.188us
00:09:55.614  75.00000% :  7158.548us
00:09:55.614  90.00000% :  9074.215us
00:09:55.614  95.00000% : 10838.646us
00:09:55.614  98.00000% : 11998.129us
00:09:55.614  99.00000% : 12754.314us
00:09:55.614  99.50000% : 24500.382us
00:09:55.614  99.90000% : 27020.997us
00:09:55.614  99.99000% : 27625.945us
00:09:55.614  99.99900% : 27625.945us
00:09:55.614  99.99990% : 27625.945us
00:09:55.614  99.99999% : 27625.945us
00:09:55.614 
00:09:55.615 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:55.615 =================================================================================
00:09:55.615   1.00000% :  5721.797us
00:09:55.615  10.00000% :  6024.271us
00:09:55.615  25.00000% :  6225.920us
00:09:55.615  50.00000% :  6452.775us
00:09:55.615  75.00000% :  6956.898us
00:09:55.615  90.00000% :  8922.978us
00:09:55.615  95.00000% : 10737.822us
00:09:55.615  98.00000% : 12098.954us
00:09:55.615  99.00000% : 12905.551us
00:09:55.615  99.50000% : 24298.732us
00:09:55.615  99.90000% : 26416.049us
00:09:55.615  99.99000% : 26819.348us
00:09:55.615  99.99900% : 27020.997us
00:09:55.615  99.99990% : 27020.997us
00:09:55.615  99.99999% : 27020.997us
00:09:55.615 
00:09:55.615 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:55.615 =================================================================================
00:09:55.615   1.00000% :  5646.178us
00:09:55.615  10.00000% :  5973.858us
00:09:55.615  25.00000% :  6200.714us
00:09:55.615  50.00000% :  6452.775us
00:09:55.615  75.00000% :  6956.898us
00:09:55.615  90.00000% :  8922.978us
00:09:55.615  95.00000% : 10435.348us
00:09:55.615  98.00000% : 12098.954us
00:09:55.615  99.00000% : 12653.489us
00:09:55.615  99.50000% : 23794.609us
00:09:55.615  99.90000% : 25811.102us
00:09:55.615  99.99000% : 26416.049us
00:09:55.615  99.99900% : 26416.049us
00:09:55.615  99.99990% : 26416.049us
00:09:55.615  99.99999% : 26416.049us
00:09:55.615 
00:09:55.615 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:55.615 =================================================================================
00:09:55.615   1.00000% :  5671.385us
00:09:55.615  10.00000% :  5999.065us
00:09:55.615  25.00000% :  6225.920us
00:09:55.615  50.00000% :  6503.188us
00:09:55.615  75.00000% :  7007.311us
00:09:55.615  90.00000% :  8822.154us
00:09:55.615  95.00000% : 10384.935us
00:09:55.615  98.00000% : 11998.129us
00:09:55.615  99.00000% : 13308.849us
00:09:55.615  99.50000% : 23492.135us
00:09:55.615  99.90000% : 24702.031us
00:09:55.615  99.99000% : 25206.154us
00:09:55.615  99.99900% : 25306.978us
00:09:55.615  99.99990% : 25306.978us
00:09:55.615  99.99999% : 25306.978us
00:09:55.615 
00:09:55.615 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:55.615 =================================================================================
00:09:55.615   1.00000% :  5646.178us
00:09:55.615  10.00000% :  5999.065us
00:09:55.615  25.00000% :  6225.920us
00:09:55.615  50.00000% :  6452.775us
00:09:55.615  75.00000% :  7007.311us
00:09:55.615  90.00000% :  8872.566us
00:09:55.615  95.00000% : 10233.698us
00:09:55.615  98.00000% : 11746.068us
00:09:55.615  99.00000% : 13107.200us
00:09:55.615  99.50000% : 22786.363us
00:09:55.615  99.90000% : 23492.135us
00:09:55.615  99.99000% : 24097.083us
00:09:55.615  99.99900% : 24097.083us
00:09:55.615  99.99990% : 24097.083us
00:09:55.615  99.99999% : 24097.083us
00:09:55.615 
00:09:55.615 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:55.615 =================================================================================
00:09:55.615   1.00000% :  5671.385us
00:09:55.615  10.00000% :  5999.065us
00:09:55.615  25.00000% :  6225.920us
00:09:55.615  50.00000% :  6452.775us
00:09:55.615  75.00000% :  6956.898us
00:09:55.615  90.00000% :  8721.329us
00:09:55.615  95.00000% : 10737.822us
00:09:55.615  98.00000% : 11645.243us
00:09:55.615  99.00000% : 12048.542us
00:09:55.615  99.50000% : 22282.240us
00:09:55.615  99.90000% : 22584.714us
00:09:55.615  99.99000% : 22887.188us
00:09:55.615  99.99900% : 22887.188us
00:09:55.615  99.99990% : 22887.188us
00:09:55.615  99.99999% : 22887.188us
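The per-device table above reports both IOPS and MiB/s for the same one-second write run; with the 12288-byte I/O size set by -o 12288, the two columns are mutually consistent. A quick sanity check (the values are copied from the table; the script itself is illustrative, not part of the test):

IO_SIZE_BYTES = 12288        # write size from the -o 12288 option
MIB = 1024 * 1024

iops = 18163.69              # per-namespace IOPS from the table above
mib_s = iops * IO_SIZE_BYTES / MIB
print(f"{mib_s:.2f} MiB/s")        # 212.86 -> matches the MiB/s column

# Six namespaces at the same per-namespace rate give the Total row:
print(f"{6 * mib_s:.2f} MiB/s")    # 1277.13 -> matches Total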
00:09:55.615 
00:09:55.615 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:55.615 ==============================================================================
00:09:55.615        Range in us     Cumulative    IO count
00:09:55.615 [... buckets: 0.0055% ( 1 IO) at 4915.200 - 4940.406 us, rising to 100.0000% at 27424.295 - 27625.945 us ...]
00:09:55.617 
00:09:55.617 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:55.617 ==============================================================================
00:09:55.617        Range in us     Cumulative    IO count
00:09:55.617 [... buckets: 0.0055% ( 1 IO) at 5268.086 - 5293.292 us, rising through 99.5984% at 24601.206 - 24702.031 us ...]
24702.031 - 24802.855: 99.6149% ( 3) 00:09:55.619 24802.855 - 24903.680: 99.6314% ( 3) 00:09:55.619 24903.680 - 25004.505: 99.6479% ( 3) 00:09:55.619 25004.505 - 25105.329: 99.6699% ( 4) 00:09:55.619 25105.329 - 25206.154: 99.6864% ( 3) 00:09:55.619 25206.154 - 25306.978: 99.7029% ( 3) 00:09:55.619 25306.978 - 25407.803: 99.7249% ( 4) 00:09:55.619 25407.803 - 25508.628: 99.7414% ( 3) 00:09:55.619 25508.628 - 25609.452: 99.7634% ( 4) 00:09:55.619 25609.452 - 25710.277: 99.7799% ( 3) 00:09:55.619 25710.277 - 25811.102: 99.8019% ( 4) 00:09:55.619 25811.102 - 26012.751: 99.8404% ( 7) 00:09:55.619 26012.751 - 26214.400: 99.8790% ( 7) 00:09:55.619 26214.400 - 26416.049: 99.9175% ( 7) 00:09:55.619 26416.049 - 26617.698: 99.9560% ( 7) 00:09:55.619 26617.698 - 26819.348: 99.9945% ( 7) 00:09:55.619 26819.348 - 27020.997: 100.0000% ( 1) 00:09:55.619 00:09:55.619 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:55.619 ============================================================================== 00:09:55.619 Range in us Cumulative IO count 00:09:55.619 5066.437 - 5091.643: 0.0055% ( 1) 00:09:55.619 5142.055 - 5167.262: 0.0110% ( 1) 00:09:55.619 5192.468 - 5217.674: 0.0165% ( 1) 00:09:55.619 5268.086 - 5293.292: 0.0275% ( 2) 00:09:55.619 5293.292 - 5318.498: 0.0440% ( 3) 00:09:55.619 5343.705 - 5368.911: 0.0605% ( 3) 00:09:55.619 5368.911 - 5394.117: 0.0770% ( 3) 00:09:55.619 5394.117 - 5419.323: 0.1155% ( 7) 00:09:55.619 5419.323 - 5444.529: 0.1375% ( 4) 00:09:55.619 5444.529 - 5469.735: 0.2036% ( 12) 00:09:55.619 5469.735 - 5494.942: 0.2916% ( 16) 00:09:55.619 5494.942 - 5520.148: 0.4016% ( 20) 00:09:55.619 5520.148 - 5545.354: 0.5062% ( 19) 00:09:55.619 5545.354 - 5570.560: 0.6272% ( 22) 00:09:55.619 5570.560 - 5595.766: 0.7427% ( 21) 00:09:55.619 5595.766 - 5620.972: 0.8693% ( 23) 00:09:55.619 5620.972 - 5646.178: 1.0123% ( 26) 00:09:55.620 5646.178 - 5671.385: 1.1884% ( 32) 00:09:55.620 5671.385 - 5696.591: 1.4470% ( 47) 00:09:55.620 5696.591 - 5721.797: 1.7771% ( 60) 00:09:55.620 5721.797 - 5747.003: 2.1402% ( 66) 00:09:55.620 5747.003 - 5772.209: 2.5693% ( 78) 00:09:55.620 5772.209 - 5797.415: 3.1085% ( 98) 00:09:55.620 5797.415 - 5822.622: 3.7247% ( 112) 00:09:55.620 5822.622 - 5847.828: 4.4509% ( 132) 00:09:55.620 5847.828 - 5873.034: 5.3367% ( 161) 00:09:55.620 5873.034 - 5898.240: 6.4371% ( 200) 00:09:55.620 5898.240 - 5923.446: 7.5704% ( 206) 00:09:55.620 5923.446 - 5948.652: 8.7808% ( 220) 00:09:55.620 5948.652 - 5973.858: 10.0737% ( 235) 00:09:55.620 5973.858 - 5999.065: 11.2731% ( 218) 00:09:55.620 5999.065 - 6024.271: 12.5055% ( 224) 00:09:55.620 6024.271 - 6049.477: 13.9470% ( 262) 00:09:55.620 6049.477 - 6074.683: 15.4269% ( 269) 00:09:55.620 6074.683 - 6099.889: 17.1325% ( 310) 00:09:55.620 6099.889 - 6125.095: 18.8600% ( 314) 00:09:55.620 6125.095 - 6150.302: 20.8352% ( 359) 00:09:55.620 6150.302 - 6175.508: 23.1844% ( 427) 00:09:55.620 6175.508 - 6200.714: 25.4676% ( 415) 00:09:55.620 6200.714 - 6225.920: 28.3616% ( 526) 00:09:55.620 6225.920 - 6251.126: 30.7713% ( 438) 00:09:55.620 6251.126 - 6276.332: 33.2581% ( 452) 00:09:55.620 6276.332 - 6301.538: 35.3378% ( 378) 00:09:55.620 6301.538 - 6326.745: 37.6981% ( 429) 00:09:55.620 6326.745 - 6351.951: 40.7130% ( 548) 00:09:55.620 6351.951 - 6377.157: 43.0953% ( 433) 00:09:55.620 6377.157 - 6402.363: 45.5381% ( 444) 00:09:55.620 6402.363 - 6427.569: 47.7883% ( 409) 00:09:55.620 6427.569 - 6452.775: 50.0055% ( 403) 00:09:55.620 6452.775 - 6503.188: 53.7962% ( 689) 00:09:55.620 6503.188 - 6553.600: 57.2458% ( 627) 
00:09:55.620 6553.600 - 6604.012: 60.0572% ( 511) 00:09:55.620 6604.012 - 6654.425: 62.3074% ( 409) 00:09:55.620 6654.425 - 6704.837: 65.3444% ( 552) 00:09:55.620 6704.837 - 6755.249: 68.0348% ( 489) 00:09:55.620 6755.249 - 6805.662: 70.4500% ( 439) 00:09:55.620 6805.662 - 6856.074: 72.4142% ( 357) 00:09:55.620 6856.074 - 6906.486: 74.4498% ( 370) 00:09:55.620 6906.486 - 6956.898: 75.9408% ( 271) 00:09:55.620 6956.898 - 7007.311: 77.2117% ( 231) 00:09:55.620 7007.311 - 7057.723: 78.5101% ( 236) 00:09:55.620 7057.723 - 7108.135: 79.3684% ( 156) 00:09:55.620 7108.135 - 7158.548: 80.3477% ( 178) 00:09:55.620 7158.548 - 7208.960: 80.9364% ( 107) 00:09:55.620 7208.960 - 7259.372: 81.4096% ( 86) 00:09:55.620 7259.372 - 7309.785: 81.9762% ( 103) 00:09:55.620 7309.785 - 7360.197: 82.3889% ( 75) 00:09:55.620 7360.197 - 7410.609: 82.6750% ( 52) 00:09:55.620 7410.609 - 7461.022: 82.9445% ( 49) 00:09:55.620 7461.022 - 7511.434: 83.2526% ( 56) 00:09:55.620 7511.434 - 7561.846: 83.5607% ( 56) 00:09:55.620 7561.846 - 7612.258: 83.8193% ( 47) 00:09:55.620 7612.258 - 7662.671: 84.0284% ( 38) 00:09:55.620 7662.671 - 7713.083: 84.2485% ( 40) 00:09:55.620 7713.083 - 7763.495: 84.4575% ( 38) 00:09:55.620 7763.495 - 7813.908: 84.6391% ( 33) 00:09:55.620 7813.908 - 7864.320: 84.7986% ( 29) 00:09:55.620 7864.320 - 7914.732: 84.9692% ( 31) 00:09:55.620 7914.732 - 7965.145: 85.1728% ( 37) 00:09:55.620 7965.145 - 8015.557: 85.3488% ( 32) 00:09:55.620 8015.557 - 8065.969: 85.5194% ( 31) 00:09:55.620 8065.969 - 8116.382: 85.8770% ( 65) 00:09:55.620 8116.382 - 8166.794: 86.1906% ( 57) 00:09:55.620 8166.794 - 8217.206: 86.3831% ( 35) 00:09:55.620 8217.206 - 8267.618: 86.5537% ( 31) 00:09:55.620 8267.618 - 8318.031: 86.7243% ( 31) 00:09:55.620 8318.031 - 8368.443: 86.8783% ( 28) 00:09:55.620 8368.443 - 8418.855: 87.0764% ( 36) 00:09:55.620 8418.855 - 8469.268: 87.4230% ( 63) 00:09:55.620 8469.268 - 8519.680: 87.8796% ( 83) 00:09:55.620 8519.680 - 8570.092: 88.2978% ( 76) 00:09:55.620 8570.092 - 8620.505: 88.6499% ( 64) 00:09:55.620 8620.505 - 8670.917: 88.8644% ( 39) 00:09:55.620 8670.917 - 8721.329: 89.1065% ( 44) 00:09:55.620 8721.329 - 8771.742: 89.3541% ( 45) 00:09:55.620 8771.742 - 8822.154: 89.6292% ( 50) 00:09:55.620 8822.154 - 8872.566: 89.9043% ( 50) 00:09:55.620 8872.566 - 8922.978: 90.1298% ( 41) 00:09:55.620 8922.978 - 8973.391: 90.3059% ( 32) 00:09:55.620 8973.391 - 9023.803: 90.4654% ( 29) 00:09:55.620 9023.803 - 9074.215: 90.6250% ( 29) 00:09:55.620 9074.215 - 9124.628: 90.8616% ( 43) 00:09:55.620 9124.628 - 9175.040: 91.1367% ( 50) 00:09:55.620 9175.040 - 9225.452: 91.5603% ( 77) 00:09:55.620 9225.452 - 9275.865: 91.8134% ( 46) 00:09:55.620 9275.865 - 9326.277: 92.0169% ( 37) 00:09:55.620 9326.277 - 9376.689: 92.2260% ( 38) 00:09:55.620 9376.689 - 9427.102: 92.4351% ( 38) 00:09:55.620 9427.102 - 9477.514: 92.7212% ( 52) 00:09:55.620 9477.514 - 9527.926: 93.2989% ( 105) 00:09:55.620 9527.926 - 9578.338: 93.4804% ( 33) 00:09:55.620 9578.338 - 9628.751: 93.6400% ( 29) 00:09:55.620 9628.751 - 9679.163: 93.7830% ( 26) 00:09:55.620 9679.163 - 9729.575: 93.9096% ( 23) 00:09:55.620 9729.575 - 9779.988: 94.0141% ( 19) 00:09:55.620 9779.988 - 9830.400: 94.1021% ( 16) 00:09:55.620 9830.400 - 9880.812: 94.1956% ( 17) 00:09:55.620 9880.812 - 9931.225: 94.2672% ( 13) 00:09:55.620 9931.225 - 9981.637: 94.3387% ( 13) 00:09:55.620 9981.637 - 10032.049: 94.3882% ( 9) 00:09:55.620 10032.049 - 10082.462: 94.4212% ( 6) 00:09:55.620 10082.462 - 10132.874: 94.4707% ( 9) 00:09:55.620 10132.874 - 10183.286: 94.5533% ( 15) 
00:09:55.620 10183.286 - 10233.698: 94.6523% ( 18) 00:09:55.620 10233.698 - 10284.111: 94.7733% ( 22) 00:09:55.620 10284.111 - 10334.523: 94.8669% ( 17) 00:09:55.620 10334.523 - 10384.935: 94.9494% ( 15) 00:09:55.620 10384.935 - 10435.348: 95.1309% ( 33) 00:09:55.620 10435.348 - 10485.760: 95.2300% ( 18) 00:09:55.620 10485.760 - 10536.172: 95.2905% ( 11) 00:09:55.620 10536.172 - 10586.585: 95.3345% ( 8) 00:09:55.620 10586.585 - 10636.997: 95.3785% ( 8) 00:09:55.621 10636.997 - 10687.409: 95.4225% ( 8) 00:09:55.621 10687.409 - 10737.822: 95.4610% ( 7) 00:09:55.621 10737.822 - 10788.234: 95.4996% ( 7) 00:09:55.621 10788.234 - 10838.646: 95.5656% ( 12) 00:09:55.621 10838.646 - 10889.058: 95.6316% ( 12) 00:09:55.621 10889.058 - 10939.471: 95.6811% ( 9) 00:09:55.621 10939.471 - 10989.883: 95.7581% ( 14) 00:09:55.621 10989.883 - 11040.295: 95.8187% ( 11) 00:09:55.621 11040.295 - 11090.708: 95.9067% ( 16) 00:09:55.621 11090.708 - 11141.120: 95.9672% ( 11) 00:09:55.621 11141.120 - 11191.532: 96.0442% ( 14) 00:09:55.621 11191.532 - 11241.945: 96.1268% ( 15) 00:09:55.621 11241.945 - 11292.357: 96.2093% ( 15) 00:09:55.621 11292.357 - 11342.769: 96.3138% ( 19) 00:09:55.621 11342.769 - 11393.182: 96.5119% ( 36) 00:09:55.621 11393.182 - 11443.594: 96.6934% ( 33) 00:09:55.621 11443.594 - 11494.006: 96.8750% ( 33) 00:09:55.621 11494.006 - 11544.418: 97.0456% ( 31) 00:09:55.621 11544.418 - 11594.831: 97.2161% ( 31) 00:09:55.621 11594.831 - 11645.243: 97.2931% ( 14) 00:09:55.621 11645.243 - 11695.655: 97.4252% ( 24) 00:09:55.621 11695.655 - 11746.068: 97.5572% ( 24) 00:09:55.621 11746.068 - 11796.480: 97.6287% ( 13) 00:09:55.621 11796.480 - 11846.892: 97.7003% ( 13) 00:09:55.621 11846.892 - 11897.305: 97.7773% ( 14) 00:09:55.621 11897.305 - 11947.717: 97.8543% ( 14) 00:09:55.621 11947.717 - 11998.129: 97.9203% ( 12) 00:09:55.621 11998.129 - 12048.542: 97.9588% ( 7) 00:09:55.621 12048.542 - 12098.954: 98.0304% ( 13) 00:09:55.621 12098.954 - 12149.366: 98.0964% ( 12) 00:09:55.621 12149.366 - 12199.778: 98.1679% ( 13) 00:09:55.621 12199.778 - 12250.191: 98.2504% ( 15) 00:09:55.621 12250.191 - 12300.603: 98.4045% ( 28) 00:09:55.621 12300.603 - 12351.015: 98.6521% ( 45) 00:09:55.621 12351.015 - 12401.428: 98.7071% ( 10) 00:09:55.621 12401.428 - 12451.840: 98.7951% ( 16) 00:09:55.621 12451.840 - 12502.252: 98.8666% ( 13) 00:09:55.621 12502.252 - 12552.665: 98.9437% ( 14) 00:09:55.621 12552.665 - 12603.077: 98.9987% ( 10) 00:09:55.621 12603.077 - 12653.489: 99.0867% ( 16) 00:09:55.621 12653.489 - 12703.902: 99.1472% ( 11) 00:09:55.621 12703.902 - 12754.314: 99.2188% ( 13) 00:09:55.621 12754.314 - 12804.726: 99.2518% ( 6) 00:09:55.621 12804.726 - 12855.138: 99.2848% ( 6) 00:09:55.621 12855.138 - 12905.551: 99.2903% ( 1) 00:09:55.621 12905.551 - 13006.375: 99.2958% ( 1) 00:09:55.621 22483.889 - 22584.714: 99.3068% ( 2) 00:09:55.621 22584.714 - 22685.538: 99.3288% ( 4) 00:09:55.621 22685.538 - 22786.363: 99.3453% ( 3) 00:09:55.621 22786.363 - 22887.188: 99.3618% ( 3) 00:09:55.621 22887.188 - 22988.012: 99.3783% ( 3) 00:09:55.621 22988.012 - 23088.837: 99.3948% ( 3) 00:09:55.621 23088.837 - 23189.662: 99.4113% ( 3) 00:09:55.621 23189.662 - 23290.486: 99.4278% ( 3) 00:09:55.621 23290.486 - 23391.311: 99.4443% ( 3) 00:09:55.621 23391.311 - 23492.135: 99.4663% ( 4) 00:09:55.621 23492.135 - 23592.960: 99.4828% ( 3) 00:09:55.621 23592.960 - 23693.785: 99.4993% ( 3) 00:09:55.621 23693.785 - 23794.609: 99.5158% ( 3) 00:09:55.621 23794.609 - 23895.434: 99.5324% ( 3) 00:09:55.621 23895.434 - 23996.258: 99.5489% ( 3) 
00:09:55.621 23996.258 - 24097.083: 99.5654% ( 3) 00:09:55.621 24097.083 - 24197.908: 99.5874% ( 4) 00:09:55.621 24197.908 - 24298.732: 99.6094% ( 4) 00:09:55.621 24298.732 - 24399.557: 99.6259% ( 3) 00:09:55.621 24399.557 - 24500.382: 99.6479% ( 4) 00:09:55.621 24500.382 - 24601.206: 99.6699% ( 4) 00:09:55.621 24601.206 - 24702.031: 99.6864% ( 3) 00:09:55.621 24702.031 - 24802.855: 99.7029% ( 3) 00:09:55.621 24802.855 - 24903.680: 99.7249% ( 4) 00:09:55.621 24903.680 - 25004.505: 99.7469% ( 4) 00:09:55.621 25004.505 - 25105.329: 99.7634% ( 3) 00:09:55.621 25105.329 - 25206.154: 99.7854% ( 4) 00:09:55.621 25206.154 - 25306.978: 99.8074% ( 4) 00:09:55.621 25306.978 - 25407.803: 99.8239% ( 3) 00:09:55.621 25407.803 - 25508.628: 99.8460% ( 4) 00:09:55.621 25508.628 - 25609.452: 99.8680% ( 4) 00:09:55.621 25609.452 - 25710.277: 99.8900% ( 4) 00:09:55.621 25710.277 - 25811.102: 99.9065% ( 3) 00:09:55.621 25811.102 - 26012.751: 99.9450% ( 7) 00:09:55.621 26012.751 - 26214.400: 99.9780% ( 6) 00:09:55.621 26214.400 - 26416.049: 100.0000% ( 4) 00:09:55.621 00:09:55.621 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:55.621 ============================================================================== 00:09:55.621 Range in us Cumulative IO count 00:09:55.621 5041.231 - 5066.437: 0.0055% ( 1) 00:09:55.621 5091.643 - 5116.849: 0.0110% ( 1) 00:09:55.621 5142.055 - 5167.262: 0.0165% ( 1) 00:09:55.621 5192.468 - 5217.674: 0.0220% ( 1) 00:09:55.621 5217.674 - 5242.880: 0.0275% ( 1) 00:09:55.621 5242.880 - 5268.086: 0.0385% ( 2) 00:09:55.621 5268.086 - 5293.292: 0.0440% ( 1) 00:09:55.621 5293.292 - 5318.498: 0.0495% ( 1) 00:09:55.621 5318.498 - 5343.705: 0.0660% ( 3) 00:09:55.621 5343.705 - 5368.911: 0.0715% ( 1) 00:09:55.621 5368.911 - 5394.117: 0.0880% ( 3) 00:09:55.621 5394.117 - 5419.323: 0.0990% ( 2) 00:09:55.621 5419.323 - 5444.529: 0.1375% ( 7) 00:09:55.621 5444.529 - 5469.735: 0.1651% ( 5) 00:09:55.621 5469.735 - 5494.942: 0.2201% ( 10) 00:09:55.621 5494.942 - 5520.148: 0.2806% ( 11) 00:09:55.621 5520.148 - 5545.354: 0.3796% ( 18) 00:09:55.621 5545.354 - 5570.560: 0.4787% ( 18) 00:09:55.621 5570.560 - 5595.766: 0.5997% ( 22) 00:09:55.621 5595.766 - 5620.972: 0.7372% ( 25) 00:09:55.621 5620.972 - 5646.178: 0.8803% ( 26) 00:09:55.621 5646.178 - 5671.385: 1.1169% ( 43) 00:09:55.621 5671.385 - 5696.591: 1.3094% ( 35) 00:09:55.621 5696.591 - 5721.797: 1.6065% ( 54) 00:09:55.621 5721.797 - 5747.003: 1.9751% ( 67) 00:09:55.621 5747.003 - 5772.209: 2.3658% ( 71) 00:09:55.621 5772.209 - 5797.415: 2.8609% ( 90) 00:09:55.621 5797.415 - 5822.622: 3.4881% ( 114) 00:09:55.622 5822.622 - 5847.828: 4.2254% ( 134) 00:09:55.622 5847.828 - 5873.034: 5.0341% ( 147) 00:09:55.622 5873.034 - 5898.240: 5.9309% ( 163) 00:09:55.622 5898.240 - 5923.446: 6.9432% ( 184) 00:09:55.622 5923.446 - 5948.652: 8.1151% ( 213) 00:09:55.622 5948.652 - 5973.858: 9.4465% ( 242) 00:09:55.622 5973.858 - 5999.065: 10.8990% ( 264) 00:09:55.622 5999.065 - 6024.271: 12.7146% ( 330) 00:09:55.622 6024.271 - 6049.477: 14.2110% ( 272) 00:09:55.622 6049.477 - 6074.683: 15.8011% ( 289) 00:09:55.622 6074.683 - 6099.889: 17.4681% ( 303) 00:09:55.622 6099.889 - 6125.095: 19.3167% ( 336) 00:09:55.622 6125.095 - 6150.302: 21.0002% ( 306) 00:09:55.622 6150.302 - 6175.508: 22.7883% ( 325) 00:09:55.622 6175.508 - 6200.714: 24.8680% ( 378) 00:09:55.622 6200.714 - 6225.920: 27.3107% ( 444) 00:09:55.622 6225.920 - 6251.126: 29.5004% ( 398) 00:09:55.622 6251.126 - 6276.332: 32.1798% ( 487) 00:09:55.622 6276.332 - 6301.538: 34.5566% ( 432) 
00:09:55.622 6301.538 - 6326.745: 37.1259% ( 467) 00:09:55.622 6326.745 - 6351.951: 39.8823% ( 501) 00:09:55.622 6351.951 - 6377.157: 42.4681% ( 470) 00:09:55.622 6377.157 - 6402.363: 44.8283% ( 429) 00:09:55.622 6402.363 - 6427.569: 47.3316% ( 455) 00:09:55.622 6427.569 - 6452.775: 49.5874% ( 410) 00:09:55.622 6452.775 - 6503.188: 53.8512% ( 775) 00:09:55.622 6503.188 - 6553.600: 57.0753% ( 586) 00:09:55.622 6553.600 - 6604.012: 59.4520% ( 432) 00:09:55.622 6604.012 - 6654.425: 61.8343% ( 433) 00:09:55.622 6654.425 - 6704.837: 64.2331% ( 436) 00:09:55.622 6704.837 - 6755.249: 66.7088% ( 450) 00:09:55.622 6755.249 - 6805.662: 68.8765% ( 394) 00:09:55.622 6805.662 - 6856.074: 71.0827% ( 401) 00:09:55.622 6856.074 - 6906.486: 72.8213% ( 316) 00:09:55.622 6906.486 - 6956.898: 74.3178% ( 272) 00:09:55.622 6956.898 - 7007.311: 75.5832% ( 230) 00:09:55.622 7007.311 - 7057.723: 76.6835% ( 200) 00:09:55.622 7057.723 - 7108.135: 77.7619% ( 196) 00:09:55.622 7108.135 - 7158.548: 78.7577% ( 181) 00:09:55.622 7158.548 - 7208.960: 80.1111% ( 246) 00:09:55.622 7208.960 - 7259.372: 80.9199% ( 147) 00:09:55.622 7259.372 - 7309.785: 81.4591% ( 98) 00:09:55.622 7309.785 - 7360.197: 81.8607% ( 73) 00:09:55.622 7360.197 - 7410.609: 82.2018% ( 62) 00:09:55.622 7410.609 - 7461.022: 82.4934% ( 53) 00:09:55.622 7461.022 - 7511.434: 82.8235% ( 60) 00:09:55.622 7511.434 - 7561.846: 83.1426% ( 58) 00:09:55.622 7561.846 - 7612.258: 83.4727% ( 60) 00:09:55.622 7612.258 - 7662.671: 83.7533% ( 51) 00:09:55.622 7662.671 - 7713.083: 84.0889% ( 61) 00:09:55.622 7713.083 - 7763.495: 84.5511% ( 84) 00:09:55.622 7763.495 - 7813.908: 84.9142% ( 66) 00:09:55.622 7813.908 - 7864.320: 85.2993% ( 70) 00:09:55.622 7864.320 - 7914.732: 85.7724% ( 86) 00:09:55.622 7914.732 - 7965.145: 86.1741% ( 73) 00:09:55.622 7965.145 - 8015.557: 86.4767% ( 55) 00:09:55.622 8015.557 - 8065.969: 87.0434% ( 103) 00:09:55.622 8065.969 - 8116.382: 87.3019% ( 47) 00:09:55.622 8116.382 - 8166.794: 87.4670% ( 30) 00:09:55.622 8166.794 - 8217.206: 87.6761% ( 38) 00:09:55.622 8217.206 - 8267.618: 88.1052% ( 78) 00:09:55.622 8267.618 - 8318.031: 88.2812% ( 32) 00:09:55.622 8318.031 - 8368.443: 88.4298% ( 27) 00:09:55.622 8368.443 - 8418.855: 88.5838% ( 28) 00:09:55.622 8418.855 - 8469.268: 88.7214% ( 25) 00:09:55.622 8469.268 - 8519.680: 88.8589% ( 25) 00:09:55.622 8519.680 - 8570.092: 89.0900% ( 42) 00:09:55.622 8570.092 - 8620.505: 89.3156% ( 41) 00:09:55.622 8620.505 - 8670.917: 89.5687% ( 46) 00:09:55.622 8670.917 - 8721.329: 89.7557% ( 34) 00:09:55.622 8721.329 - 8771.742: 89.9098% ( 28) 00:09:55.622 8771.742 - 8822.154: 90.0748% ( 30) 00:09:55.622 8822.154 - 8872.566: 90.2344% ( 29) 00:09:55.622 8872.566 - 8922.978: 90.4324% ( 36) 00:09:55.622 8922.978 - 8973.391: 90.6910% ( 47) 00:09:55.622 8973.391 - 9023.803: 90.8616% ( 31) 00:09:55.622 9023.803 - 9074.215: 91.0541% ( 35) 00:09:55.622 9074.215 - 9124.628: 91.3127% ( 47) 00:09:55.622 9124.628 - 9175.040: 91.4723% ( 29) 00:09:55.622 9175.040 - 9225.452: 91.7694% ( 54) 00:09:55.622 9225.452 - 9275.865: 91.8794% ( 20) 00:09:55.622 9275.865 - 9326.277: 91.9949% ( 21) 00:09:55.622 9326.277 - 9376.689: 92.2590% ( 48) 00:09:55.622 9376.689 - 9427.102: 92.3801% ( 22) 00:09:55.622 9427.102 - 9477.514: 92.5231% ( 26) 00:09:55.622 9477.514 - 9527.926: 92.6717% ( 27) 00:09:55.622 9527.926 - 9578.338: 92.8037% ( 24) 00:09:55.622 9578.338 - 9628.751: 92.9632% ( 29) 00:09:55.622 9628.751 - 9679.163: 93.1393% ( 32) 00:09:55.622 9679.163 - 9729.575: 93.4364% ( 54) 00:09:55.622 9729.575 - 9779.988: 93.5684% ( 24) 
00:09:55.622 9779.988 - 9830.400: 93.6730% ( 19) 00:09:55.622 9830.400 - 9880.812: 93.7720% ( 18) 00:09:55.623 9880.812 - 9931.225: 93.8820% ( 20) 00:09:55.623 9931.225 - 9981.637: 93.9646% ( 15) 00:09:55.623 9981.637 - 10032.049: 94.0911% ( 23) 00:09:55.623 10032.049 - 10082.462: 94.2011% ( 20) 00:09:55.623 10082.462 - 10132.874: 94.3332% ( 24) 00:09:55.623 10132.874 - 10183.286: 94.4652% ( 24) 00:09:55.623 10183.286 - 10233.698: 94.6083% ( 26) 00:09:55.623 10233.698 - 10284.111: 94.7238% ( 21) 00:09:55.623 10284.111 - 10334.523: 94.9549% ( 42) 00:09:55.623 10334.523 - 10384.935: 95.1364% ( 33) 00:09:55.623 10384.935 - 10435.348: 95.2245% ( 16) 00:09:55.623 10435.348 - 10485.760: 95.3290% ( 19) 00:09:55.623 10485.760 - 10536.172: 95.4610% ( 24) 00:09:55.623 10536.172 - 10586.585: 95.5876% ( 23) 00:09:55.623 10586.585 - 10636.997: 95.7306% ( 26) 00:09:55.623 10636.997 - 10687.409: 95.8462% ( 21) 00:09:55.623 10687.409 - 10737.822: 95.9782% ( 24) 00:09:55.623 10737.822 - 10788.234: 96.1268% ( 27) 00:09:55.623 10788.234 - 10838.646: 96.2808% ( 28) 00:09:55.623 10838.646 - 10889.058: 96.4459% ( 30) 00:09:55.623 10889.058 - 10939.471: 96.5999% ( 28) 00:09:55.623 10939.471 - 10989.883: 96.7099% ( 20) 00:09:55.623 10989.883 - 11040.295: 96.8310% ( 22) 00:09:55.623 11040.295 - 11090.708: 97.0125% ( 33) 00:09:55.623 11090.708 - 11141.120: 97.1006% ( 16) 00:09:55.623 11141.120 - 11191.532: 97.1666% ( 12) 00:09:55.623 11191.532 - 11241.945: 97.2326% ( 12) 00:09:55.623 11241.945 - 11292.357: 97.2931% ( 11) 00:09:55.623 11292.357 - 11342.769: 97.3482% ( 10) 00:09:55.623 11342.769 - 11393.182: 97.3977% ( 9) 00:09:55.623 11393.182 - 11443.594: 97.4472% ( 9) 00:09:55.623 11443.594 - 11494.006: 97.5077% ( 11) 00:09:55.623 11494.006 - 11544.418: 97.5462% ( 7) 00:09:55.623 11544.418 - 11594.831: 97.6122% ( 12) 00:09:55.623 11594.831 - 11645.243: 97.6618% ( 9) 00:09:55.623 11645.243 - 11695.655: 97.7113% ( 9) 00:09:55.623 11695.655 - 11746.068: 97.7553% ( 8) 00:09:55.623 11746.068 - 11796.480: 97.7993% ( 8) 00:09:55.623 11796.480 - 11846.892: 97.8598% ( 11) 00:09:55.623 11846.892 - 11897.305: 97.9203% ( 11) 00:09:55.623 11897.305 - 11947.717: 97.9974% ( 14) 00:09:55.623 11947.717 - 11998.129: 98.0744% ( 14) 00:09:55.623 11998.129 - 12048.542: 98.1404% ( 12) 00:09:55.623 12048.542 - 12098.954: 98.2119% ( 13) 00:09:55.623 12098.954 - 12149.366: 98.2779% ( 12) 00:09:55.623 12149.366 - 12199.778: 98.3275% ( 9) 00:09:55.623 12199.778 - 12250.191: 98.3715% ( 8) 00:09:55.623 12250.191 - 12300.603: 98.4375% ( 12) 00:09:55.623 12300.603 - 12351.015: 98.4760% ( 7) 00:09:55.623 12351.015 - 12401.428: 98.5365% ( 11) 00:09:55.623 12401.428 - 12451.840: 98.5750% ( 7) 00:09:55.623 12451.840 - 12502.252: 98.6191% ( 8) 00:09:55.623 12502.252 - 12552.665: 98.6631% ( 8) 00:09:55.623 12552.665 - 12603.077: 98.6906% ( 5) 00:09:55.623 12603.077 - 12653.489: 98.7016% ( 2) 00:09:55.623 12653.489 - 12703.902: 98.7291% ( 5) 00:09:55.623 12703.902 - 12754.314: 98.7456% ( 3) 00:09:55.623 12754.314 - 12804.726: 98.7566% ( 2) 00:09:55.623 12804.726 - 12855.138: 98.7731% ( 3) 00:09:55.623 12855.138 - 12905.551: 98.7841% ( 2) 00:09:55.623 12905.551 - 13006.375: 98.8116% ( 5) 00:09:55.623 13006.375 - 13107.200: 98.8281% ( 3) 00:09:55.623 13107.200 - 13208.025: 98.8886% ( 11) 00:09:55.623 13208.025 - 13308.849: 99.0702% ( 33) 00:09:55.623 13308.849 - 13409.674: 99.1197% ( 9) 00:09:55.623 13409.674 - 13510.498: 99.1527% ( 6) 00:09:55.623 13510.498 - 13611.323: 99.1802% ( 5) 00:09:55.623 13611.323 - 13712.148: 99.2188% ( 7) 00:09:55.623 
13712.148 - 13812.972: 99.2573% ( 7) 00:09:55.623 13812.972 - 13913.797: 99.2903% ( 6) 00:09:55.623 13913.797 - 14014.622: 99.2958% ( 1) 00:09:55.623 22887.188 - 22988.012: 99.3068% ( 2) 00:09:55.623 22988.012 - 23088.837: 99.3343% ( 5) 00:09:55.623 23088.837 - 23189.662: 99.3673% ( 6) 00:09:55.623 23189.662 - 23290.486: 99.4278% ( 11) 00:09:55.623 23290.486 - 23391.311: 99.4938% ( 12) 00:09:55.623 23391.311 - 23492.135: 99.5599% ( 12) 00:09:55.623 23492.135 - 23592.960: 99.6534% ( 17) 00:09:55.623 23592.960 - 23693.785: 99.7359% ( 15) 00:09:55.623 23693.785 - 23794.609: 99.7634% ( 5) 00:09:55.623 23794.609 - 23895.434: 99.7799% ( 3) 00:09:55.623 23895.434 - 23996.258: 99.7964% ( 3) 00:09:55.623 23996.258 - 24097.083: 99.8129% ( 3) 00:09:55.623 24097.083 - 24197.908: 99.8294% ( 3) 00:09:55.623 24197.908 - 24298.732: 99.8460% ( 3) 00:09:55.623 24298.732 - 24399.557: 99.8625% ( 3) 00:09:55.623 24399.557 - 24500.382: 99.8790% ( 3) 00:09:55.623 24500.382 - 24601.206: 99.8900% ( 2) 00:09:55.623 24601.206 - 24702.031: 99.9120% ( 4) 00:09:55.623 24702.031 - 24802.855: 99.9285% ( 3) 00:09:55.623 24802.855 - 24903.680: 99.9450% ( 3) 00:09:55.623 24903.680 - 25004.505: 99.9615% ( 3) 00:09:55.623 25004.505 - 25105.329: 99.9780% ( 3) 00:09:55.623 25105.329 - 25206.154: 99.9945% ( 3) 00:09:55.623 25206.154 - 25306.978: 100.0000% ( 1) 00:09:55.623 00:09:55.624 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:55.624 ============================================================================== 00:09:55.624 Range in us Cumulative IO count 00:09:55.624 5167.262 - 5192.468: 0.0055% ( 1) 00:09:55.624 5192.468 - 5217.674: 0.0110% ( 1) 00:09:55.624 5242.880 - 5268.086: 0.0220% ( 2) 00:09:55.624 5343.705 - 5368.911: 0.0440% ( 4) 00:09:55.624 5368.911 - 5394.117: 0.0605% ( 3) 00:09:55.624 5394.117 - 5419.323: 0.0715% ( 2) 00:09:55.624 5419.323 - 5444.529: 0.1100% ( 7) 00:09:55.624 5444.529 - 5469.735: 0.1265% ( 3) 00:09:55.624 5469.735 - 5494.942: 0.1981% ( 13) 00:09:55.624 5494.942 - 5520.148: 0.2861% ( 16) 00:09:55.624 5520.148 - 5545.354: 0.3741% ( 16) 00:09:55.624 5545.354 - 5570.560: 0.4842% ( 20) 00:09:55.624 5570.560 - 5595.766: 0.6382% ( 28) 00:09:55.624 5595.766 - 5620.972: 0.8198% ( 33) 00:09:55.624 5620.972 - 5646.178: 1.0288% ( 38) 00:09:55.624 5646.178 - 5671.385: 1.2269% ( 36) 00:09:55.624 5671.385 - 5696.591: 1.4800% ( 46) 00:09:55.624 5696.591 - 5721.797: 1.7551% ( 50) 00:09:55.624 5721.797 - 5747.003: 2.0522% ( 54) 00:09:55.624 5747.003 - 5772.209: 2.3603% ( 56) 00:09:55.624 5772.209 - 5797.415: 2.7564% ( 72) 00:09:55.624 5797.415 - 5822.622: 3.2515% ( 90) 00:09:55.624 5822.622 - 5847.828: 3.8567% ( 110) 00:09:55.624 5847.828 - 5873.034: 4.5445% ( 125) 00:09:55.624 5873.034 - 5898.240: 5.2762% ( 133) 00:09:55.624 5898.240 - 5923.446: 6.3435% ( 194) 00:09:55.624 5923.446 - 5948.652: 7.5044% ( 211) 00:09:55.624 5948.652 - 5973.858: 8.8523% ( 245) 00:09:55.624 5973.858 - 5999.065: 10.2718% ( 258) 00:09:55.624 5999.065 - 6024.271: 11.9608% ( 307) 00:09:55.624 6024.271 - 6049.477: 13.5288% ( 285) 00:09:55.624 6049.477 - 6074.683: 15.0418% ( 275) 00:09:55.624 6074.683 - 6099.889: 16.5383% ( 272) 00:09:55.624 6099.889 - 6125.095: 18.3539% ( 330) 00:09:55.624 6125.095 - 6150.302: 20.2245% ( 340) 00:09:55.624 6150.302 - 6175.508: 22.4197% ( 399) 00:09:55.624 6175.508 - 6200.714: 24.5599% ( 389) 00:09:55.624 6200.714 - 6225.920: 26.9916% ( 442) 00:09:55.624 6225.920 - 6251.126: 29.2364% ( 408) 00:09:55.624 6251.126 - 6276.332: 31.7892% ( 464) 00:09:55.624 6276.332 - 6301.538: 34.0009% ( 
402) 00:09:55.624 6301.538 - 6326.745: 36.7132% ( 493) 00:09:55.624 6326.745 - 6351.951: 38.9525% ( 407) 00:09:55.624 6351.951 - 6377.157: 41.3457% ( 435) 00:09:55.624 6377.157 - 6402.363: 44.0361% ( 489) 00:09:55.624 6402.363 - 6427.569: 47.5792% ( 644) 00:09:55.624 6427.569 - 6452.775: 50.4016% ( 513) 00:09:55.624 6452.775 - 6503.188: 54.7810% ( 796) 00:09:55.624 6503.188 - 6553.600: 58.0161% ( 588) 00:09:55.624 6553.600 - 6604.012: 60.4533% ( 443) 00:09:55.624 6604.012 - 6654.425: 63.2317% ( 505) 00:09:55.624 6654.425 - 6704.837: 65.3224% ( 380) 00:09:55.624 6704.837 - 6755.249: 67.2975% ( 359) 00:09:55.624 6755.249 - 6805.662: 69.3827% ( 379) 00:09:55.624 6805.662 - 6856.074: 71.1268% ( 317) 00:09:55.624 6856.074 - 6906.486: 73.0139% ( 343) 00:09:55.624 6906.486 - 6956.898: 74.5764% ( 284) 00:09:55.624 6956.898 - 7007.311: 76.2379% ( 302) 00:09:55.624 7007.311 - 7057.723: 77.2392% ( 182) 00:09:55.624 7057.723 - 7108.135: 78.4551% ( 221) 00:09:55.624 7108.135 - 7158.548: 79.4014% ( 172) 00:09:55.624 7158.548 - 7208.960: 79.9736% ( 104) 00:09:55.624 7208.960 - 7259.372: 80.4027% ( 78) 00:09:55.624 7259.372 - 7309.785: 80.7934% ( 71) 00:09:55.624 7309.785 - 7360.197: 81.2445% ( 82) 00:09:55.624 7360.197 - 7410.609: 81.7672% ( 95) 00:09:55.624 7410.609 - 7461.022: 82.1743% ( 74) 00:09:55.624 7461.022 - 7511.434: 82.6089% ( 79) 00:09:55.624 7511.434 - 7561.846: 83.1756% ( 103) 00:09:55.624 7561.846 - 7612.258: 83.5993% ( 77) 00:09:55.624 7612.258 - 7662.671: 83.9459% ( 63) 00:09:55.624 7662.671 - 7713.083: 84.1879% ( 44) 00:09:55.624 7713.083 - 7763.495: 84.4465% ( 47) 00:09:55.624 7763.495 - 7813.908: 84.7326% ( 52) 00:09:55.624 7813.908 - 7864.320: 84.9802% ( 45) 00:09:55.624 7864.320 - 7914.732: 85.2058% ( 41) 00:09:55.624 7914.732 - 7965.145: 85.4974% ( 53) 00:09:55.624 7965.145 - 8015.557: 85.9925% ( 90) 00:09:55.624 8015.557 - 8065.969: 86.3336% ( 62) 00:09:55.624 8065.969 - 8116.382: 86.5372% ( 37) 00:09:55.624 8116.382 - 8166.794: 86.7518% ( 39) 00:09:55.624 8166.794 - 8217.206: 86.9718% ( 40) 00:09:55.624 8217.206 - 8267.618: 87.1809% ( 38) 00:09:55.624 8267.618 - 8318.031: 87.4450% ( 48) 00:09:55.624 8318.031 - 8368.443: 87.6981% ( 46) 00:09:55.624 8368.443 - 8418.855: 87.9346% ( 43) 00:09:55.624 8418.855 - 8469.268: 88.2592% ( 59) 00:09:55.624 8469.268 - 8519.680: 88.4188% ( 29) 00:09:55.624 8519.680 - 8570.092: 88.6004% ( 33) 00:09:55.624 8570.092 - 8620.505: 88.7764% ( 32) 00:09:55.624 8620.505 - 8670.917: 88.9305% ( 28) 00:09:55.624 8670.917 - 8721.329: 89.2165% ( 52) 00:09:55.624 8721.329 - 8771.742: 89.5467% ( 60) 00:09:55.624 8771.742 - 8822.154: 89.7612% ( 39) 00:09:55.624 8822.154 - 8872.566: 90.0143% ( 46) 00:09:55.624 8872.566 - 8922.978: 90.3389% ( 59) 00:09:55.624 8922.978 - 8973.391: 90.7956% ( 83) 00:09:55.624 8973.391 - 9023.803: 91.0706% ( 50) 00:09:55.624 9023.803 - 9074.215: 91.2632% ( 35) 00:09:55.624 9074.215 - 9124.628: 91.4723% ( 38) 00:09:55.624 9124.628 - 9175.040: 91.6593% ( 34) 00:09:55.624 9175.040 - 9225.452: 91.8464% ( 34) 00:09:55.624 9225.452 - 9275.865: 92.2700% ( 77) 00:09:55.624 9275.865 - 9326.277: 92.5286% ( 47) 00:09:55.624 9326.277 - 9376.689: 92.7377% ( 38) 00:09:55.624 9376.689 - 9427.102: 92.9082% ( 31) 00:09:55.624 9427.102 - 9477.514: 93.0348% ( 23) 00:09:55.624 9477.514 - 9527.926: 93.1448% ( 20) 00:09:55.624 9527.926 - 9578.338: 93.2438% ( 18) 00:09:55.624 9578.338 - 9628.751: 93.3649% ( 22) 00:09:55.624 9628.751 - 9679.163: 93.4474% ( 15) 00:09:55.624 9679.163 - 9729.575: 93.5629% ( 21) 00:09:55.624 9729.575 - 9779.988: 93.7390% ( 32) 
00:09:55.624 9779.988 - 9830.400: 94.0636% ( 59) 00:09:55.625 9830.400 - 9880.812: 94.1736% ( 20) 00:09:55.625 9880.812 - 9931.225: 94.2782% ( 19) 00:09:55.625 9931.225 - 9981.637: 94.3607% ( 15) 00:09:55.625 9981.637 - 10032.049: 94.4982% ( 25) 00:09:55.625 10032.049 - 10082.462: 94.6633% ( 30) 00:09:55.625 10082.462 - 10132.874: 94.8889% ( 41) 00:09:55.625 10132.874 - 10183.286: 94.9879% ( 18) 00:09:55.625 10183.286 - 10233.698: 95.0759% ( 16) 00:09:55.625 10233.698 - 10284.111: 95.1364% ( 11) 00:09:55.625 10284.111 - 10334.523: 95.2080% ( 13) 00:09:55.625 10334.523 - 10384.935: 95.2850% ( 14) 00:09:55.625 10384.935 - 10435.348: 95.3950% ( 20) 00:09:55.625 10435.348 - 10485.760: 95.4776% ( 15) 00:09:55.625 10485.760 - 10536.172: 95.5876% ( 20) 00:09:55.625 10536.172 - 10586.585: 95.6701% ( 15) 00:09:55.625 10586.585 - 10636.997: 95.7691% ( 18) 00:09:55.625 10636.997 - 10687.409: 95.8462% ( 14) 00:09:55.625 10687.409 - 10737.822: 95.9727% ( 23) 00:09:55.625 10737.822 - 10788.234: 96.1433% ( 31) 00:09:55.625 10788.234 - 10838.646: 96.1983% ( 10) 00:09:55.625 10838.646 - 10889.058: 96.2533% ( 10) 00:09:55.625 10889.058 - 10939.471: 96.3193% ( 12) 00:09:55.625 10939.471 - 10989.883: 96.3798% ( 11) 00:09:55.625 10989.883 - 11040.295: 96.4624% ( 15) 00:09:55.625 11040.295 - 11090.708: 96.5449% ( 15) 00:09:55.625 11090.708 - 11141.120: 96.6274% ( 15) 00:09:55.625 11141.120 - 11191.532: 96.7210% ( 17) 00:09:55.625 11191.532 - 11241.945: 96.8255% ( 19) 00:09:55.625 11241.945 - 11292.357: 96.9190% ( 17) 00:09:55.625 11292.357 - 11342.769: 96.9850% ( 12) 00:09:55.625 11342.769 - 11393.182: 97.0951% ( 20) 00:09:55.625 11393.182 - 11443.594: 97.1996% ( 19) 00:09:55.625 11443.594 - 11494.006: 97.3316% ( 24) 00:09:55.625 11494.006 - 11544.418: 97.4802% ( 27) 00:09:55.625 11544.418 - 11594.831: 97.6232% ( 26) 00:09:55.625 11594.831 - 11645.243: 97.8543% ( 42) 00:09:55.625 11645.243 - 11695.655: 97.9974% ( 26) 00:09:55.625 11695.655 - 11746.068: 98.1074% ( 20) 00:09:55.625 11746.068 - 11796.480: 98.1679% ( 11) 00:09:55.625 11796.480 - 11846.892: 98.2174% ( 9) 00:09:55.625 11846.892 - 11897.305: 98.2669% ( 9) 00:09:55.625 11897.305 - 11947.717: 98.3055% ( 7) 00:09:55.625 11947.717 - 11998.129: 98.3275% ( 4) 00:09:55.625 11998.129 - 12048.542: 98.3550% ( 5) 00:09:55.625 12048.542 - 12098.954: 98.3770% ( 4) 00:09:55.625 12098.954 - 12149.366: 98.4045% ( 5) 00:09:55.625 12149.366 - 12199.778: 98.4155% ( 2) 00:09:55.625 12199.778 - 12250.191: 98.4210% ( 1) 00:09:55.625 12250.191 - 12300.603: 98.4430% ( 4) 00:09:55.625 12300.603 - 12351.015: 98.4815% ( 7) 00:09:55.625 12351.015 - 12401.428: 98.5145% ( 6) 00:09:55.625 12401.428 - 12451.840: 98.5365% ( 4) 00:09:55.625 12451.840 - 12502.252: 98.5695% ( 6) 00:09:55.625 12502.252 - 12552.665: 98.6521% ( 15) 00:09:55.625 12552.665 - 12603.077: 98.7181% ( 12) 00:09:55.625 12603.077 - 12653.489: 98.7896% ( 13) 00:09:55.625 12653.489 - 12703.902: 98.8446% ( 10) 00:09:55.625 12703.902 - 12754.314: 98.8721% ( 5) 00:09:55.625 12754.314 - 12804.726: 98.8996% ( 5) 00:09:55.625 12804.726 - 12855.138: 98.9217% ( 4) 00:09:55.625 12855.138 - 12905.551: 98.9382% ( 3) 00:09:55.625 12905.551 - 13006.375: 98.9932% ( 10) 00:09:55.625 13006.375 - 13107.200: 99.0537% ( 11) 00:09:55.625 13107.200 - 13208.025: 99.1142% ( 11) 00:09:55.625 13208.025 - 13308.849: 99.1527% ( 7) 00:09:55.625 13308.849 - 13409.674: 99.1857% ( 6) 00:09:55.625 13409.674 - 13510.498: 99.2243% ( 7) 00:09:55.625 13510.498 - 13611.323: 99.2628% ( 7) 00:09:55.625 13611.323 - 13712.148: 99.2903% ( 5) 00:09:55.625 
13712.148 - 13812.972: 99.2958% ( 1) 00:09:55.625 21576.468 - 21677.292: 99.3013% ( 1) 00:09:55.625 22181.415 - 22282.240: 99.3068% ( 1) 00:09:55.625 22282.240 - 22383.065: 99.3123% ( 1) 00:09:55.625 22483.889 - 22584.714: 99.3178% ( 1) 00:09:55.625 22584.714 - 22685.538: 99.3728% ( 10) 00:09:55.625 22685.538 - 22786.363: 99.5048% ( 24) 00:09:55.625 22786.363 - 22887.188: 99.6314% ( 23) 00:09:55.625 22887.188 - 22988.012: 99.7029% ( 13) 00:09:55.625 22988.012 - 23088.837: 99.7524% ( 9) 00:09:55.625 23088.837 - 23189.662: 99.8184% ( 12) 00:09:55.625 23189.662 - 23290.486: 99.8570% ( 7) 00:09:55.625 23290.486 - 23391.311: 99.8845% ( 5) 00:09:55.625 23391.311 - 23492.135: 99.9175% ( 6) 00:09:55.625 23492.135 - 23592.960: 99.9285% ( 2) 00:09:55.625 23592.960 - 23693.785: 99.9450% ( 3) 00:09:55.625 23693.785 - 23794.609: 99.9615% ( 3) 00:09:55.625 23794.609 - 23895.434: 99.9725% ( 2) 00:09:55.625 23895.434 - 23996.258: 99.9890% ( 3) 00:09:55.625 23996.258 - 24097.083: 100.0000% ( 2) 00:09:55.625 00:09:55.625 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:55.625 ============================================================================== 00:09:55.625 Range in us Cumulative IO count 00:09:55.625 5142.055 - 5167.262: 0.0110% ( 2) 00:09:55.625 5167.262 - 5192.468: 0.0165% ( 1) 00:09:55.625 5192.468 - 5217.674: 0.0220% ( 1) 00:09:55.625 5242.880 - 5268.086: 0.0330% ( 2) 00:09:55.625 5268.086 - 5293.292: 0.0385% ( 1) 00:09:55.625 5293.292 - 5318.498: 0.0550% ( 3) 00:09:55.625 5318.498 - 5343.705: 0.0660% ( 2) 00:09:55.625 5343.705 - 5368.911: 0.0770% ( 2) 00:09:55.625 5368.911 - 5394.117: 0.1155% ( 7) 00:09:55.625 5394.117 - 5419.323: 0.1375% ( 4) 00:09:55.625 5419.323 - 5444.529: 0.1926% ( 10) 00:09:55.625 5444.529 - 5469.735: 0.2531% ( 11) 00:09:55.625 5469.735 - 5494.942: 0.3081% ( 10) 00:09:55.625 5494.942 - 5520.148: 0.3961% ( 16) 00:09:55.625 5520.148 - 5545.354: 0.5062% ( 20) 00:09:55.625 5545.354 - 5570.560: 0.5777% ( 13) 00:09:55.625 5570.560 - 5595.766: 0.6657% ( 16) 00:09:55.625 5595.766 - 5620.972: 0.8088% ( 26) 00:09:55.625 5620.972 - 5646.178: 0.9793% ( 31) 00:09:55.625 5646.178 - 5671.385: 1.2324% ( 46) 00:09:55.626 5671.385 - 5696.591: 1.5350% ( 55) 00:09:55.626 5696.591 - 5721.797: 1.8211% ( 52) 00:09:55.626 5721.797 - 5747.003: 2.1292% ( 56) 00:09:55.626 5747.003 - 5772.209: 2.5308% ( 73) 00:09:55.626 5772.209 - 5797.415: 2.9544% ( 77) 00:09:55.626 5797.415 - 5822.622: 3.4771% ( 95) 00:09:55.626 5822.622 - 5847.828: 4.1703% ( 126) 00:09:55.626 5847.828 - 5873.034: 4.9186% ( 136) 00:09:55.626 5873.034 - 5898.240: 5.7438% ( 150) 00:09:55.626 5898.240 - 5923.446: 6.7232% ( 178) 00:09:55.626 5923.446 - 5948.652: 7.9170% ( 217) 00:09:55.626 5948.652 - 5973.858: 9.6336% ( 312) 00:09:55.626 5973.858 - 5999.065: 10.8715% ( 225) 00:09:55.626 5999.065 - 6024.271: 12.2524% ( 251) 00:09:55.626 6024.271 - 6049.477: 13.6609% ( 256) 00:09:55.626 6049.477 - 6074.683: 15.0693% ( 256) 00:09:55.626 6074.683 - 6099.889: 16.8684% ( 327) 00:09:55.626 6099.889 - 6125.095: 18.7060% ( 334) 00:09:55.626 6125.095 - 6150.302: 20.6041% ( 345) 00:09:55.626 6150.302 - 6175.508: 22.2546% ( 300) 00:09:55.626 6175.508 - 6200.714: 24.2793% ( 368) 00:09:55.626 6200.714 - 6225.920: 26.9751% ( 490) 00:09:55.626 6225.920 - 6251.126: 29.3079% ( 424) 00:09:55.626 6251.126 - 6276.332: 31.6406% ( 424) 00:09:55.626 6276.332 - 6301.538: 33.7643% ( 386) 00:09:55.626 6301.538 - 6326.745: 36.5537% ( 507) 00:09:55.626 6326.745 - 6351.951: 39.8823% ( 605) 00:09:55.626 6351.951 - 6377.157: 42.2425% ( 429) 
00:09:55.626 6377.157 - 6402.363: 44.3882% ( 390) 00:09:55.626 6402.363 - 6427.569: 47.4307% ( 553) 00:09:55.626 6427.569 - 6452.775: 50.2916% ( 520) 00:09:55.626 6452.775 - 6503.188: 54.3574% ( 739) 00:09:55.626 6503.188 - 6553.600: 57.9445% ( 652) 00:09:55.626 6553.600 - 6604.012: 60.1893% ( 408) 00:09:55.626 6604.012 - 6654.425: 62.4945% ( 419) 00:09:55.626 6654.425 - 6704.837: 65.1574% ( 484) 00:09:55.626 6704.837 - 6755.249: 67.5341% ( 432) 00:09:55.626 6755.249 - 6805.662: 69.9494% ( 439) 00:09:55.626 6805.662 - 6856.074: 72.2436% ( 417) 00:09:55.626 6856.074 - 6906.486: 74.1747% ( 351) 00:09:55.626 6906.486 - 6956.898: 76.0178% ( 335) 00:09:55.626 6956.898 - 7007.311: 77.2227% ( 219) 00:09:55.626 7007.311 - 7057.723: 78.3506% ( 205) 00:09:55.626 7057.723 - 7108.135: 79.1318% ( 142) 00:09:55.626 7108.135 - 7158.548: 79.6930% ( 102) 00:09:55.626 7158.548 - 7208.960: 80.1772% ( 88) 00:09:55.626 7208.960 - 7259.372: 80.5898% ( 75) 00:09:55.626 7259.372 - 7309.785: 80.9804% ( 71) 00:09:55.626 7309.785 - 7360.197: 81.3325% ( 64) 00:09:55.626 7360.197 - 7410.609: 81.6681% ( 61) 00:09:55.626 7410.609 - 7461.022: 81.9322% ( 48) 00:09:55.626 7461.022 - 7511.434: 82.1578% ( 41) 00:09:55.626 7511.434 - 7561.846: 82.3283% ( 31) 00:09:55.626 7561.846 - 7612.258: 82.5154% ( 34) 00:09:55.626 7612.258 - 7662.671: 82.6860% ( 31) 00:09:55.626 7662.671 - 7713.083: 82.8730% ( 34) 00:09:55.626 7713.083 - 7763.495: 83.3957% ( 95) 00:09:55.626 7763.495 - 7813.908: 83.8358% ( 80) 00:09:55.626 7813.908 - 7864.320: 84.1219% ( 52) 00:09:55.626 7864.320 - 7914.732: 84.3750% ( 46) 00:09:55.626 7914.732 - 7965.145: 84.7106% ( 61) 00:09:55.626 7965.145 - 8015.557: 85.1507% ( 80) 00:09:55.626 8015.557 - 8065.969: 85.5799% ( 78) 00:09:55.626 8065.969 - 8116.382: 85.8990% ( 58) 00:09:55.626 8116.382 - 8166.794: 86.2621% ( 66) 00:09:55.626 8166.794 - 8217.206: 86.7738% ( 93) 00:09:55.626 8217.206 - 8267.618: 87.2359% ( 84) 00:09:55.626 8267.618 - 8318.031: 87.7806% ( 99) 00:09:55.626 8318.031 - 8368.443: 88.1327% ( 64) 00:09:55.626 8368.443 - 8418.855: 88.4133% ( 51) 00:09:55.626 8418.855 - 8469.268: 88.7709% ( 65) 00:09:55.626 8469.268 - 8519.680: 89.0845% ( 57) 00:09:55.626 8519.680 - 8570.092: 89.3761% ( 53) 00:09:55.626 8570.092 - 8620.505: 89.5742% ( 36) 00:09:55.626 8620.505 - 8670.917: 89.7667% ( 35) 00:09:55.626 8670.917 - 8721.329: 90.1794% ( 75) 00:09:55.626 8721.329 - 8771.742: 90.5370% ( 65) 00:09:55.626 8771.742 - 8822.154: 90.7680% ( 42) 00:09:55.626 8822.154 - 8872.566: 90.9936% ( 41) 00:09:55.626 8872.566 - 8922.978: 91.2357% ( 44) 00:09:55.626 8922.978 - 8973.391: 91.4778% ( 44) 00:09:55.626 8973.391 - 9023.803: 91.7419% ( 48) 00:09:55.626 9023.803 - 9074.215: 92.0610% ( 58) 00:09:55.626 9074.215 - 9124.628: 92.3856% ( 59) 00:09:55.626 9124.628 - 9175.040: 92.5011% ( 21) 00:09:55.626 9175.040 - 9225.452: 92.5891% ( 16) 00:09:55.626 9225.452 - 9275.865: 92.6717% ( 15) 00:09:55.626 9275.865 - 9326.277: 92.7322% ( 11) 00:09:55.626 9326.277 - 9376.689: 92.7982% ( 12) 00:09:55.626 9376.689 - 9427.102: 92.8422% ( 8) 00:09:55.626 9427.102 - 9477.514: 92.8862% ( 8) 00:09:55.626 9477.514 - 9527.926: 92.9247% ( 7) 00:09:55.626 9527.926 - 9578.338: 92.9577% ( 6) 00:09:55.626 9578.338 - 9628.751: 92.9853% ( 5) 00:09:55.626 9628.751 - 9679.163: 93.0183% ( 6) 00:09:55.626 9679.163 - 9729.575: 93.0513% ( 6) 00:09:55.626 9729.575 - 9779.988: 93.0898% ( 7) 00:09:55.626 9779.988 - 9830.400: 93.1338% ( 8) 00:09:55.626 9830.400 - 9880.812: 93.1998% ( 12) 00:09:55.626 9880.812 - 9931.225: 93.2438% ( 8) 00:09:55.626 
9931.225 - 9981.637: 93.2934% ( 9) 00:09:55.626 9981.637 - 10032.049: 93.3319% ( 7) 00:09:55.626 10032.049 - 10082.462: 93.3869% ( 10) 00:09:55.626 10082.462 - 10132.874: 93.4749% ( 16) 00:09:55.626 10132.874 - 10183.286: 93.5354% ( 11) 00:09:55.626 10183.286 - 10233.698: 93.6950% ( 29) 00:09:55.626 10233.698 - 10284.111: 93.7610% ( 12) 00:09:55.626 10284.111 - 10334.523: 93.8435% ( 15) 00:09:55.626 10334.523 - 10384.935: 93.9261% ( 15) 00:09:55.626 10384.935 - 10435.348: 94.0361% ( 20) 00:09:55.626 10435.348 - 10485.760: 94.1626% ( 23) 00:09:55.626 10485.760 - 10536.172: 94.3387% ( 32) 00:09:55.626 10536.172 - 10586.585: 94.5643% ( 41) 00:09:55.626 10586.585 - 10636.997: 94.7733% ( 38) 00:09:55.626 10636.997 - 10687.409: 94.9384% ( 30) 00:09:55.626 10687.409 - 10737.822: 95.0759% ( 25) 00:09:55.626 10737.822 - 10788.234: 95.2465% ( 31) 00:09:55.626 10788.234 - 10838.646: 95.4005% ( 28) 00:09:55.626 10838.646 - 10889.058: 95.7141% ( 57) 00:09:55.626 10889.058 - 10939.471: 95.9507% ( 43) 00:09:55.626 10939.471 - 10989.883: 96.1103% ( 29) 00:09:55.626 10989.883 - 11040.295: 96.2258% ( 21) 00:09:55.626 11040.295 - 11090.708: 96.3633% ( 25) 00:09:55.626 11090.708 - 11141.120: 96.4899% ( 23) 00:09:55.626 11141.120 - 11191.532: 96.6604% ( 31) 00:09:55.626 11191.532 - 11241.945: 96.8365% ( 32) 00:09:55.626 11241.945 - 11292.357: 97.0125% ( 32) 00:09:55.626 11292.357 - 11342.769: 97.1721% ( 29) 00:09:55.626 11342.769 - 11393.182: 97.2931% ( 22) 00:09:55.626 11393.182 - 11443.594: 97.4252% ( 24) 00:09:55.626 11443.594 - 11494.006: 97.5297% ( 19) 00:09:55.626 11494.006 - 11544.418: 97.6562% ( 23) 00:09:55.626 11544.418 - 11594.831: 97.9368% ( 51) 00:09:55.626 11594.831 - 11645.243: 98.1844% ( 45) 00:09:55.626 11645.243 - 11695.655: 98.3770% ( 35) 00:09:55.626 11695.655 - 11746.068: 98.5640% ( 34) 00:09:55.626 11746.068 - 11796.480: 98.6521% ( 16) 00:09:55.626 11796.480 - 11846.892: 98.7456% ( 17) 00:09:55.626 11846.892 - 11897.305: 98.8281% ( 15) 00:09:55.626 11897.305 - 11947.717: 98.9107% ( 15) 00:09:55.626 11947.717 - 11998.129: 98.9657% ( 10) 00:09:55.626 11998.129 - 12048.542: 99.0042% ( 7) 00:09:55.626 12048.542 - 12098.954: 99.0427% ( 7) 00:09:55.626 12098.954 - 12149.366: 99.0757% ( 6) 00:09:55.626 12149.366 - 12199.778: 99.1087% ( 6) 00:09:55.626 12199.778 - 12250.191: 99.1307% ( 4) 00:09:55.626 12250.191 - 12300.603: 99.1527% ( 4) 00:09:55.626 12300.603 - 12351.015: 99.1692% ( 3) 00:09:55.626 12351.015 - 12401.428: 99.1802% ( 2) 00:09:55.626 12401.428 - 12451.840: 99.1912% ( 2) 00:09:55.626 12451.840 - 12502.252: 99.2022% ( 2) 00:09:55.627 12502.252 - 12552.665: 99.2132% ( 2) 00:09:55.627 12552.665 - 12603.077: 99.2243% ( 2) 00:09:55.627 12603.077 - 12653.489: 99.2408% ( 3) 00:09:55.627 12653.489 - 12703.902: 99.2518% ( 2) 00:09:55.627 12703.902 - 12754.314: 99.2628% ( 2) 00:09:55.627 12754.314 - 12804.726: 99.2738% ( 2) 00:09:55.627 12804.726 - 12855.138: 99.2793% ( 1) 00:09:55.627 12855.138 - 12905.551: 99.2903% ( 2) 00:09:55.627 12905.551 - 13006.375: 99.2958% ( 1) 00:09:55.627 21979.766 - 22080.591: 99.3783% ( 15) 00:09:55.627 22080.591 - 22181.415: 99.4553% ( 14) 00:09:55.627 22181.415 - 22282.240: 99.5489% ( 17) 00:09:55.627 22282.240 - 22383.065: 99.6479% ( 18) 00:09:55.627 22383.065 - 22483.889: 99.7469% ( 18) 00:09:55.627 22483.889 - 22584.714: 99.9450% ( 36) 00:09:55.627 22584.714 - 22685.538: 99.9670% ( 4) 00:09:55.627 22685.538 - 22786.363: 99.9835% ( 3) 00:09:55.627 22786.363 - 22887.188: 100.0000% ( 3) 00:09:55.627 00:09:55.627 07:25:04 -- nvme/nvme.sh@24 -- # '[' -b 
00:09:55.627 
00:09:55.627 real	0m2.597s
00:09:55.627 user	0m2.293s
00:09:55.627 sys	0m0.198s
00:09:55.627 07:25:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:55.627 ************************************
00:09:55.627 END TEST nvme_perf
00:09:55.627 ************************************
00:09:55.627 07:25:04 -- common/autotest_common.sh@10 -- # set +x
00:09:55.627 07:25:04 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:55.627 07:25:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:55.627 07:25:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:55.627 07:25:04 -- common/autotest_common.sh@10 -- # set +x
00:09:55.627 ************************************
00:09:55.627 START TEST nvme_hello_world
00:09:55.627 ************************************
00:09:55.627 07:25:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:55.886 Initializing NVMe Controllers
00:09:55.886 Attached to 0000:00:06.0
00:09:55.886 Namespace ID: 1 size: 6GB
00:09:55.886 Attached to 0000:00:07.0
00:09:55.886 Namespace ID: 1 size: 5GB
00:09:55.886 Attached to 0000:00:09.0
00:09:55.886 Namespace ID: 1 size: 1GB
00:09:55.886 Attached to 0000:00:08.0
00:09:55.886 Namespace ID: 1 size: 4GB
00:09:55.886 Namespace ID: 2 size: 4GB
00:09:55.886 Namespace ID: 3 size: 4GB
00:09:55.886 Initialization complete.
00:09:55.886 INFO: using host memory buffer for IO
00:09:55.886 Hello world!
00:09:55.886 INFO: using host memory buffer for IO
00:09:55.886 Hello world!
00:09:55.886 INFO: using host memory buffer for IO
00:09:55.886 Hello world!
00:09:55.886 INFO: using host memory buffer for IO
00:09:55.886 Hello world!
00:09:55.886 INFO: using host memory buffer for IO
00:09:55.886 Hello world!
00:09:55.886 INFO: using host memory buffer for IO
00:09:55.886 Hello world!
00:09:55.886 
00:09:55.886 real	0m0.260s
00:09:55.886 user	0m0.123s
00:09:55.886 sys	0m0.087s
00:09:55.886 07:25:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:55.886 ************************************
00:09:55.886 END TEST nvme_hello_world
00:09:55.886 07:25:04 -- common/autotest_common.sh@10 -- # set +x
00:09:55.886 ************************************
00:09:55.886 07:25:04 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:55.886 07:25:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:55.886 07:25:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:55.886 07:25:04 -- common/autotest_common.sh@10 -- # set +x
00:09:55.886 ************************************
00:09:55.886 START TEST nvme_sgl
00:09:55.886 ************************************
00:09:55.886 07:25:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:56.145 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:09:56.145 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:09:56.145 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:09:56.145 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:09:56.145 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:09:56.145 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:09:56.145 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:09:56.145 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:09:56.145 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:09:56.145 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:09:56.145 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:09:56.145 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:09:56.145 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:09:56.145 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:09:56.145 NVMe Readv/Writev Request test
00:09:56.145 Attached to 0000:00:06.0
00:09:56.145 Attached to 0000:00:07.0
00:09:56.145 Attached to 0000:00:09.0
00:09:56.145 Attached to 0000:00:08.0
00:09:56.145 0000:00:06.0: build_io_request_2 test passed
00:09:56.145 0000:00:06.0: build_io_request_4 test passed
00:09:56.145 0000:00:06.0: build_io_request_5 test passed
00:09:56.145 0000:00:06.0: build_io_request_6 test passed
00:09:56.145 0000:00:06.0: build_io_request_7 test passed
00:09:56.145 0000:00:06.0: build_io_request_10 test passed
00:09:56.145 0000:00:07.0: build_io_request_2 test passed
00:09:56.145 0000:00:07.0: build_io_request_4 test passed
00:09:56.145 0000:00:07.0: build_io_request_5 test passed
00:09:56.145 0000:00:07.0: build_io_request_6 test passed
00:09:56.145 0000:00:07.0: build_io_request_7 test passed
00:09:56.145 0000:00:07.0: build_io_request_10 test passed
00:09:56.145 Cleaning up...
00:09:56.145 
00:09:56.145 real	0m0.373s
00:09:56.145 user	0m0.238s
00:09:56.145 sys	0m0.085s
00:09:56.145 07:25:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:56.145 07:25:05 -- common/autotest_common.sh@10 -- # set +x
00:09:56.145 ************************************
00:09:56.145 END TEST nvme_sgl
00:09:56.145 ************************************
00:09:56.145 07:25:05 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:56.145 07:25:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:56.145 07:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:56.145 07:25:05 -- common/autotest_common.sh@10 -- # set +x
00:09:56.145 ************************************
00:09:56.145 START TEST nvme_e2edp
00:09:56.146 ************************************
00:09:56.146 07:25:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:56.404 NVMe Write/Read with End-to-End data protection test
00:09:56.404 Attached to 0000:00:06.0
00:09:56.404 Attached to 0000:00:07.0
00:09:56.404 Attached to 0000:00:09.0
00:09:56.404 Attached to 0000:00:08.0
00:09:56.404 Cleaning up...
00:09:56.404 
00:09:56.404 real	0m0.192s
00:09:56.404 user	0m0.063s
00:09:56.404 sys	0m0.079s
00:09:56.404 07:25:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:56.404 07:25:05 -- common/autotest_common.sh@10 -- # set +x
00:09:56.404 ************************************
00:09:56.404 END TEST nvme_e2edp
00:09:56.404 ************************************
00:09:56.404 07:25:05 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:56.404 07:25:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:56.404 07:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:56.404 07:25:05 -- common/autotest_common.sh@10 -- # set +x
00:09:56.404 ************************************
00:09:56.404 START TEST nvme_reserve
00:09:56.404 ************************************
00:09:56.404 07:25:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:56.663 =====================================================
00:09:56.663 NVMe Controller at PCI bus 0, device 6, function 0
00:09:56.663 =====================================================
00:09:56.663 Reservations: Not Supported
00:09:56.664 =====================================================
00:09:56.664 NVMe Controller at PCI bus 0, device 7, function 0
00:09:56.664 =====================================================
00:09:56.664 Reservations: Not Supported
00:09:56.664 =====================================================
00:09:56.664 NVMe Controller at PCI bus 0, device 9, function 0
00:09:56.664 =====================================================
00:09:56.664 Reservations: Not Supported
00:09:56.664 =====================================================
00:09:56.664 NVMe Controller at PCI bus 0, device 8, function 0
00:09:56.664 =====================================================
00:09:56.664 Reservations: Not Supported
00:09:56.664 Reservation test passed
00:09:56.664 
00:09:56.664 real	0m0.195s
00:09:56.664 user	0m0.059s
00:09:56.664 sys	0m0.089s
00:09:56.664 07:25:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:56.664 ************************************
00:09:56.664 END TEST nvme_reserve
00:09:56.664 07:25:05 -- common/autotest_common.sh@10 -- # set +x
00:09:56.664 ************************************
00:09:56.664 07:25:05 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:56.664 07:25:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:56.664 07:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:56.664 07:25:05 -- common/autotest_common.sh@10 -- # set +x
00:09:56.664 ************************************
00:09:56.664 START TEST nvme_err_injection
00:09:56.664 ************************************
00:09:56.664 07:25:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:56.922 NVMe Error Injection test
00:09:56.922 Attached to 0000:00:06.0
00:09:56.922 Attached to 0000:00:07.0
00:09:56.922 Attached to 0000:00:09.0
00:09:56.922 Attached to 0000:00:08.0
00:09:56.922 0000:00:06.0: get features failed as expected
00:09:56.922 0000:00:07.0: get features failed as expected
00:09:56.922 0000:00:09.0: get features failed as expected
00:09:56.922 0000:00:08.0: get features failed as expected
00:09:56.922 0000:00:06.0: get features successfully as expected
00:09:56.922 0000:00:07.0: get features successfully as expected
00:09:56.922 0000:00:09.0: get features successfully as expected
successfully as expected 00:09:56.922 0000:00:08.0: get features successfully as expected 00:09:56.922 0000:00:06.0: read failed as expected 00:09:56.922 0000:00:07.0: read failed as expected 00:09:56.922 0000:00:09.0: read failed as expected 00:09:56.922 0000:00:08.0: read failed as expected 00:09:56.922 0000:00:06.0: read successfully as expected 00:09:56.922 0000:00:07.0: read successfully as expected 00:09:56.922 0000:00:09.0: read successfully as expected 00:09:56.922 0000:00:08.0: read successfully as expected 00:09:56.922 Cleaning up... 00:09:56.922 00:09:56.922 real 0m0.243s 00:09:56.922 user 0m0.099s 00:09:56.922 sys 0m0.098s 00:09:56.922 07:25:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:56.922 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:56.922 ************************************ 00:09:56.922 END TEST nvme_err_injection 00:09:56.922 ************************************ 00:09:56.922 07:25:06 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:56.922 07:25:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:56.922 07:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.922 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:56.922 ************************************ 00:09:56.922 START TEST nvme_overhead 00:09:56.922 ************************************ 00:09:56.922 07:25:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:58.332 Initializing NVMe Controllers 00:09:58.332 Attached to 0000:00:06.0 00:09:58.332 Attached to 0000:00:07.0 00:09:58.332 Attached to 0000:00:09.0 00:09:58.332 Attached to 0000:00:08.0 00:09:58.332 Initialization complete. Launching workers. 
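A note on reading the nvme_overhead output that follows: the histogram buckets are labeled "Range in us" and carry a cumulative percentage plus a per-bucket sample count, while the submit/complete summary lines report avg, min, and max in nanoseconds. As a minimal illustrative sketch (the capture file name overhead.log is an assumption, not part of the harness), the ns summary can be rescaled to microseconds for direct comparison with the buckets:

    # Illustrative only: strip comma separators, then convert ns -> us.
    awk '/submit \(in ns\)/ {
        gsub(/,/, "")
        n = split($0, f, " ")
        printf "submit avg %.2f us, min %.2f us, max %.2f us\n",
               f[n-2]/1000, f[n-1]/1000, f[n]/1000
    }' overhead.log

For this run that gives roughly 11.47 us average submit latency, which lines up with the bulk of the submit histogram sitting in the 11.028 to 11.471 us buckets.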
00:09:58.332 submit (in ns) avg, min, max = 11469.6, 10156.9, 272735.4 00:09:58.332 complete (in ns) avg, min, max = 7537.5, 7180.8, 321533.1 00:09:58.332 00:09:58.332 Submit histogram 00:09:58.332 ================ 00:09:58.332 Range in us Cumulative Count 00:09:58.332 10.142 - 10.191: 0.0056% ( 1) 00:09:58.332 10.683 - 10.732: 0.0113% ( 1) 00:09:58.332 10.831 - 10.880: 0.0169% ( 1) 00:09:58.332 10.880 - 10.929: 0.0620% ( 8) 00:09:58.332 10.929 - 10.978: 0.4513% ( 69) 00:09:58.332 10.978 - 11.028: 2.2450% ( 318) 00:09:58.332 11.028 - 11.077: 6.8310% ( 813) 00:09:58.332 11.077 - 11.126: 15.9014% ( 1608) 00:09:58.332 11.126 - 11.175: 29.1573% ( 2350) 00:09:58.332 11.175 - 11.225: 44.1449% ( 2657) 00:09:58.332 11.225 - 11.274: 57.8633% ( 2432) 00:09:58.332 11.274 - 11.323: 68.7556% ( 1931) 00:09:58.332 11.323 - 11.372: 75.6769% ( 1227) 00:09:58.332 11.372 - 11.422: 79.9977% ( 766) 00:09:58.332 11.422 - 11.471: 82.7166% ( 482) 00:09:58.332 11.471 - 11.520: 84.4991% ( 316) 00:09:58.332 11.520 - 11.569: 85.6047% ( 196) 00:09:58.332 11.569 - 11.618: 86.6200% ( 180) 00:09:58.332 11.618 - 11.668: 87.5677% ( 168) 00:09:58.332 11.668 - 11.717: 88.5435% ( 173) 00:09:58.332 11.717 - 11.766: 89.7281% ( 210) 00:09:58.332 11.766 - 11.815: 90.7491% ( 181) 00:09:58.332 11.815 - 11.865: 91.7532% ( 178) 00:09:58.332 11.865 - 11.914: 92.8418% ( 193) 00:09:58.332 11.914 - 11.963: 93.5074% ( 118) 00:09:58.332 11.963 - 12.012: 94.2746% ( 136) 00:09:58.332 12.012 - 12.062: 94.9289% ( 116) 00:09:58.332 12.062 - 12.111: 95.4930% ( 100) 00:09:58.332 12.111 - 12.160: 95.9443% ( 80) 00:09:58.332 12.160 - 12.209: 96.2489% ( 54) 00:09:58.332 12.209 - 12.258: 96.5140% ( 47) 00:09:58.332 12.258 - 12.308: 96.6663% ( 27) 00:09:58.332 12.308 - 12.357: 96.8130% ( 26) 00:09:58.332 12.357 - 12.406: 96.8976% ( 15) 00:09:58.332 12.406 - 12.455: 96.9596% ( 11) 00:09:58.332 12.455 - 12.505: 97.0273% ( 12) 00:09:58.332 12.505 - 12.554: 97.0611% ( 6) 00:09:58.332 12.554 - 12.603: 97.0950% ( 6) 00:09:58.332 12.603 - 12.702: 97.1796% ( 15) 00:09:58.332 12.702 - 12.800: 97.2022% ( 4) 00:09:58.332 12.800 - 12.898: 97.2417% ( 7) 00:09:58.332 12.898 - 12.997: 97.2868% ( 8) 00:09:58.332 12.997 - 13.095: 97.3206% ( 6) 00:09:58.332 13.095 - 13.194: 97.3996% ( 14) 00:09:58.332 13.194 - 13.292: 97.4616% ( 11) 00:09:58.332 13.292 - 13.391: 97.6083% ( 26) 00:09:58.332 13.391 - 13.489: 97.7380% ( 23) 00:09:58.332 13.489 - 13.588: 97.8283% ( 16) 00:09:58.332 13.588 - 13.686: 97.9129% ( 15) 00:09:58.332 13.686 - 13.785: 97.9580% ( 8) 00:09:58.332 13.785 - 13.883: 98.0314% ( 13) 00:09:58.332 13.883 - 13.982: 98.0483% ( 3) 00:09:58.332 13.982 - 14.080: 98.0934% ( 8) 00:09:58.332 14.080 - 14.178: 98.1216% ( 5) 00:09:58.332 14.178 - 14.277: 98.1498% ( 5) 00:09:58.332 14.375 - 14.474: 98.1611% ( 2) 00:09:58.332 14.474 - 14.572: 98.1724% ( 2) 00:09:58.332 14.572 - 14.671: 98.1837% ( 2) 00:09:58.332 14.671 - 14.769: 98.2062% ( 4) 00:09:58.332 14.769 - 14.868: 98.2288% ( 4) 00:09:58.332 14.868 - 14.966: 98.2457% ( 3) 00:09:58.332 14.966 - 15.065: 98.2796% ( 6) 00:09:58.332 15.065 - 15.163: 98.3021% ( 4) 00:09:58.332 15.163 - 15.262: 98.3360% ( 6) 00:09:58.332 15.262 - 15.360: 98.3529% ( 3) 00:09:58.332 15.360 - 15.458: 98.3924% ( 7) 00:09:58.332 15.458 - 15.557: 98.4037% ( 2) 00:09:58.332 15.557 - 15.655: 98.4206% ( 3) 00:09:58.332 15.655 - 15.754: 98.4657% ( 8) 00:09:58.332 15.754 - 15.852: 98.4770% ( 2) 00:09:58.332 15.852 - 15.951: 98.5052% ( 5) 00:09:58.332 15.951 - 16.049: 98.5278% ( 4) 00:09:58.332 16.049 - 16.148: 98.5390% ( 2) 00:09:58.332 16.148 - 
16.246: 98.5447% ( 1) 00:09:58.332 16.345 - 16.443: 98.5560% ( 2) 00:09:58.332 16.443 - 16.542: 98.5785% ( 4) 00:09:58.332 16.542 - 16.640: 98.6011% ( 4) 00:09:58.332 16.640 - 16.738: 98.6462% ( 8) 00:09:58.332 16.738 - 16.837: 98.7026% ( 10) 00:09:58.333 16.837 - 16.935: 98.8324% ( 23) 00:09:58.333 16.935 - 17.034: 98.9226% ( 16) 00:09:58.333 17.034 - 17.132: 98.9959% ( 13) 00:09:58.333 17.132 - 17.231: 99.0693% ( 13) 00:09:58.333 17.231 - 17.329: 99.1144% ( 8) 00:09:58.333 17.329 - 17.428: 99.1652% ( 9) 00:09:58.333 17.428 - 17.526: 99.2272% ( 11) 00:09:58.333 17.526 - 17.625: 99.3062% ( 14) 00:09:58.333 17.625 - 17.723: 99.3795% ( 13) 00:09:58.333 17.723 - 17.822: 99.4303% ( 9) 00:09:58.333 17.822 - 17.920: 99.4867% ( 10) 00:09:58.333 17.920 - 18.018: 99.5149% ( 5) 00:09:58.333 18.018 - 18.117: 99.5318% ( 3) 00:09:58.333 18.117 - 18.215: 99.5544% ( 4) 00:09:58.333 18.215 - 18.314: 99.5713% ( 3) 00:09:58.333 18.314 - 18.412: 99.5939% ( 4) 00:09:58.333 18.412 - 18.511: 99.6164% ( 4) 00:09:58.333 18.511 - 18.609: 99.6333% ( 3) 00:09:58.333 18.609 - 18.708: 99.6503% ( 3) 00:09:58.333 18.708 - 18.806: 99.6785% ( 5) 00:09:58.333 18.806 - 18.905: 99.6898% ( 2) 00:09:58.333 18.905 - 19.003: 99.6954% ( 1) 00:09:58.333 19.003 - 19.102: 99.7067% ( 2) 00:09:58.333 19.102 - 19.200: 99.7236% ( 3) 00:09:58.333 19.397 - 19.495: 99.7292% ( 1) 00:09:58.333 19.495 - 19.594: 99.7462% ( 3) 00:09:58.333 19.594 - 19.692: 99.7518% ( 1) 00:09:58.333 19.692 - 19.791: 99.7631% ( 2) 00:09:58.333 19.791 - 19.889: 99.7744% ( 2) 00:09:58.333 19.889 - 19.988: 99.7800% ( 1) 00:09:58.333 19.988 - 20.086: 99.7913% ( 2) 00:09:58.333 20.086 - 20.185: 99.8082% ( 3) 00:09:58.333 20.185 - 20.283: 99.8195% ( 2) 00:09:58.333 20.382 - 20.480: 99.8308% ( 2) 00:09:58.333 20.480 - 20.578: 99.8364% ( 1) 00:09:58.333 20.578 - 20.677: 99.8421% ( 1) 00:09:58.333 20.775 - 20.874: 99.8533% ( 2) 00:09:58.333 21.169 - 21.268: 99.8646% ( 2) 00:09:58.333 21.268 - 21.366: 99.8703% ( 1) 00:09:58.333 21.366 - 21.465: 99.8759% ( 1) 00:09:58.333 21.465 - 21.563: 99.8815% ( 1) 00:09:58.333 21.563 - 21.662: 99.8872% ( 1) 00:09:58.333 21.858 - 21.957: 99.8928% ( 1) 00:09:58.333 22.843 - 22.942: 99.8985% ( 1) 00:09:58.333 23.532 - 23.631: 99.9041% ( 1) 00:09:58.333 23.828 - 23.926: 99.9097% ( 1) 00:09:58.333 24.025 - 24.123: 99.9154% ( 1) 00:09:58.333 24.123 - 24.222: 99.9210% ( 1) 00:09:58.333 24.714 - 24.812: 99.9267% ( 1) 00:09:58.333 24.911 - 25.009: 99.9323% ( 1) 00:09:58.333 25.108 - 25.206: 99.9380% ( 1) 00:09:58.333 26.585 - 26.782: 99.9436% ( 1) 00:09:58.333 27.175 - 27.372: 99.9492% ( 1) 00:09:58.333 30.917 - 31.114: 99.9549% ( 1) 00:09:58.333 33.674 - 33.871: 99.9605% ( 1) 00:09:58.333 34.068 - 34.265: 99.9662% ( 1) 00:09:58.333 38.203 - 38.400: 99.9718% ( 1) 00:09:58.333 41.551 - 41.748: 99.9774% ( 1) 00:09:58.333 51.594 - 51.988: 99.9831% ( 1) 00:09:58.333 54.745 - 55.138: 99.9887% ( 1) 00:09:58.333 90.978 - 91.372: 99.9944% ( 1) 00:09:58.333 272.542 - 274.117: 100.0000% ( 1) 00:09:58.333 00:09:58.333 Complete histogram 00:09:58.333 ================== 00:09:58.333 Range in us Cumulative Count 00:09:58.333 7.138 - 7.188: 0.0169% ( 3) 00:09:58.333 7.188 - 7.237: 0.6487% ( 112) 00:09:58.333 7.237 - 7.286: 5.5731% ( 873) 00:09:58.333 7.286 - 7.335: 20.2448% ( 2601) 00:09:58.333 7.335 - 7.385: 42.5598% ( 3956) 00:09:58.333 7.385 - 7.434: 65.4614% ( 4060) 00:09:58.333 7.434 - 7.483: 80.8946% ( 2736) 00:09:58.333 7.483 - 7.532: 89.3107% ( 1492) 00:09:58.333 7.532 - 7.582: 93.6597% ( 771) 00:09:58.333 7.582 - 7.631: 95.5945% ( 343) 00:09:58.333 
7.631 - 7.680: 96.6042% ( 179) 00:09:58.333 7.680 - 7.729: 97.0555% ( 80) 00:09:58.333 7.729 - 7.778: 97.2868% ( 41) 00:09:58.333 7.778 - 7.828: 97.4898% ( 36) 00:09:58.333 7.828 - 7.877: 97.5857% ( 17) 00:09:58.333 7.877 - 7.926: 97.6704% ( 15) 00:09:58.333 7.926 - 7.975: 97.7211% ( 9) 00:09:58.333 7.975 - 8.025: 97.7832% ( 11) 00:09:58.333 8.025 - 8.074: 97.8114% ( 5) 00:09:58.333 8.074 - 8.123: 97.8509% ( 7) 00:09:58.333 8.123 - 8.172: 97.9185% ( 12) 00:09:58.333 8.172 - 8.222: 97.9919% ( 13) 00:09:58.333 8.222 - 8.271: 98.0991% ( 19) 00:09:58.333 8.271 - 8.320: 98.1724% ( 13) 00:09:58.333 8.320 - 8.369: 98.2344% ( 11) 00:09:58.333 8.369 - 8.418: 98.2852% ( 9) 00:09:58.333 8.418 - 8.468: 98.3078% ( 4) 00:09:58.333 8.468 - 8.517: 98.3303% ( 4) 00:09:58.333 8.517 - 8.566: 98.3360% ( 1) 00:09:58.333 8.566 - 8.615: 98.3472% ( 2) 00:09:58.333 8.615 - 8.665: 98.3585% ( 2) 00:09:58.333 8.714 - 8.763: 98.3642% ( 1) 00:09:58.333 8.812 - 8.862: 98.3698% ( 1) 00:09:58.333 9.600 - 9.649: 98.3755% ( 1) 00:09:58.333 9.649 - 9.698: 98.3867% ( 2) 00:09:58.333 9.748 - 9.797: 98.3980% ( 2) 00:09:58.333 9.797 - 9.846: 98.4037% ( 1) 00:09:58.333 9.895 - 9.945: 98.4149% ( 2) 00:09:58.333 9.945 - 9.994: 98.4206% ( 1) 00:09:58.333 9.994 - 10.043: 98.4319% ( 2) 00:09:58.333 10.043 - 10.092: 98.4375% ( 1) 00:09:58.333 10.092 - 10.142: 98.4431% ( 1) 00:09:58.333 10.240 - 10.289: 98.4544% ( 2) 00:09:58.333 10.289 - 10.338: 98.4601% ( 1) 00:09:58.333 10.535 - 10.585: 98.4657% ( 1) 00:09:58.333 10.585 - 10.634: 98.4713% ( 1) 00:09:58.333 10.732 - 10.782: 98.4770% ( 1) 00:09:58.333 10.978 - 11.028: 98.4826% ( 1) 00:09:58.333 11.372 - 11.422: 98.4883% ( 1) 00:09:58.333 11.471 - 11.520: 98.4939% ( 1) 00:09:58.333 11.569 - 11.618: 98.5108% ( 3) 00:09:58.333 11.618 - 11.668: 98.5221% ( 2) 00:09:58.333 11.668 - 11.717: 98.5278% ( 1) 00:09:58.333 11.766 - 11.815: 98.5334% ( 1) 00:09:58.333 11.963 - 12.012: 98.5390% ( 1) 00:09:58.333 12.308 - 12.357: 98.5447% ( 1) 00:09:58.333 12.455 - 12.505: 98.5560% ( 2) 00:09:58.333 12.702 - 12.800: 98.5842% ( 5) 00:09:58.333 12.800 - 12.898: 98.6462% ( 11) 00:09:58.333 12.898 - 12.997: 98.7026% ( 10) 00:09:58.333 12.997 - 13.095: 98.7929% ( 16) 00:09:58.333 13.095 - 13.194: 98.8831% ( 16) 00:09:58.333 13.194 - 13.292: 98.9508% ( 12) 00:09:58.333 13.292 - 13.391: 98.9959% ( 8) 00:09:58.333 13.391 - 13.489: 99.0636% ( 12) 00:09:58.333 13.489 - 13.588: 99.1200% ( 10) 00:09:58.333 13.588 - 13.686: 99.1990% ( 14) 00:09:58.333 13.686 - 13.785: 99.2949% ( 17) 00:09:58.333 13.785 - 13.883: 99.3344% ( 7) 00:09:58.333 13.883 - 13.982: 99.4303% ( 17) 00:09:58.333 13.982 - 14.080: 99.4867% ( 10) 00:09:58.333 14.080 - 14.178: 99.5375% ( 9) 00:09:58.333 14.178 - 14.277: 99.5713% ( 6) 00:09:58.333 14.277 - 14.375: 99.6390% ( 12) 00:09:58.333 14.375 - 14.474: 99.6616% ( 4) 00:09:58.333 14.474 - 14.572: 99.6785% ( 3) 00:09:58.333 14.572 - 14.671: 99.7236% ( 8) 00:09:58.333 14.671 - 14.769: 99.7405% ( 3) 00:09:58.333 14.769 - 14.868: 99.7574% ( 3) 00:09:58.333 14.868 - 14.966: 99.7744% ( 3) 00:09:58.333 14.966 - 15.065: 99.7856% ( 2) 00:09:58.333 15.065 - 15.163: 99.7913% ( 1) 00:09:58.333 15.163 - 15.262: 99.7969% ( 1) 00:09:58.333 15.262 - 15.360: 99.8026% ( 1) 00:09:58.333 15.360 - 15.458: 99.8082% ( 1) 00:09:58.333 15.458 - 15.557: 99.8139% ( 1) 00:09:58.333 15.557 - 15.655: 99.8195% ( 1) 00:09:58.333 15.655 - 15.754: 99.8308% ( 2) 00:09:58.333 15.754 - 15.852: 99.8364% ( 1) 00:09:58.333 16.345 - 16.443: 99.8477% ( 2) 00:09:58.333 16.542 - 16.640: 99.8533% ( 1) 00:09:58.333 16.640 - 16.738: 
99.8590% ( 1) 00:09:58.333 16.738 - 16.837: 99.8646% ( 1) 00:09:58.333 16.837 - 16.935: 99.8703% ( 1) 00:09:58.333 16.935 - 17.034: 99.8759% ( 1) 00:09:58.333 17.132 - 17.231: 99.8872% ( 2) 00:09:58.333 17.231 - 17.329: 99.8985% ( 2) 00:09:58.334 17.329 - 17.428: 99.9041% ( 1) 00:09:58.334 17.625 - 17.723: 99.9154% ( 2) 00:09:58.334 17.920 - 18.018: 99.9210% ( 1) 00:09:58.334 18.609 - 18.708: 99.9267% ( 1) 00:09:58.334 18.905 - 19.003: 99.9323% ( 1) 00:09:58.334 19.495 - 19.594: 99.9380% ( 1) 00:09:58.334 19.791 - 19.889: 99.9436% ( 1) 00:09:58.334 19.988 - 20.086: 99.9492% ( 1) 00:09:58.334 20.283 - 20.382: 99.9549% ( 1) 00:09:58.334 20.874 - 20.972: 99.9605% ( 1) 00:09:58.334 21.563 - 21.662: 99.9662% ( 1) 00:09:58.334 21.858 - 21.957: 99.9718% ( 1) 00:09:58.334 23.532 - 23.631: 99.9774% ( 1) 00:09:58.334 25.403 - 25.600: 99.9831% ( 1) 00:09:58.334 29.342 - 29.538: 99.9887% ( 1) 00:09:58.334 30.917 - 31.114: 99.9944% ( 1) 00:09:58.334 321.378 - 322.954: 100.0000% ( 1) 00:09:58.334 00:09:58.334 00:09:58.334 real 0m1.215s 00:09:58.334 user 0m1.068s 00:09:58.334 sys 0m0.094s 00:09:58.334 07:25:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:58.334 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.334 ************************************ 00:09:58.334 END TEST nvme_overhead 00:09:58.334 ************************************ 00:09:58.334 07:25:07 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:58.334 07:25:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:58.334 07:25:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.334 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.334 ************************************ 00:09:58.334 START TEST nvme_arbitration 00:09:58.334 ************************************ 00:09:58.334 07:25:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:01.624 Initializing NVMe Controllers 00:10:01.624 Attached to 0000:00:06.0 00:10:01.624 Attached to 0000:00:07.0 00:10:01.624 Attached to 0000:00:09.0 00:10:01.624 Attached to 0000:00:08.0 00:10:01.624 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:01.624 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:01.624 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:01.624 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:01.624 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:01.624 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:01.625 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:01.625 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:01.625 Initialization complete. Launching workers. 
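In the arbitration results that follow, each worker line reports throughput in IO/s alongside a projected "secs/100000 ios"; the two columns are reciprocals over the fixed 100000-I/O budget the test runs with. A quick illustrative check for the 960 IO/s workers:

    # Illustrative only: secs/100000 ios = 100000 / (IO/s).
    awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 960 }'   # -> 104.17

which matches the 104.17 figure printed for the core 0 threads below.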
00:10:01.625 Starting thread on core 1 with urgent priority queue 00:10:01.625 Starting thread on core 2 with urgent priority queue 00:10:01.625 Starting thread on core 3 with urgent priority queue 00:10:01.625 Starting thread on core 0 with urgent priority queue 00:10:01.625 QEMU NVMe Ctrl (12340 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:10:01.625 QEMU NVMe Ctrl (12342 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:10:01.625 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:10:01.625 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:10:01.625 QEMU NVMe Ctrl (12343 ) core 2: 832.00 IO/s 120.19 secs/100000 ios 00:10:01.625 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:10:01.625 ======================================================== 00:10:01.625 00:10:01.625 00:10:01.625 real 0m3.327s 00:10:01.625 user 0m9.338s 00:10:01.625 sys 0m0.104s 00:10:01.625 07:25:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:01.625 07:25:10 -- common/autotest_common.sh@10 -- # set +x 00:10:01.625 ************************************ 00:10:01.625 END TEST nvme_arbitration 00:10:01.625 ************************************ 00:10:01.625 07:25:10 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:01.625 07:25:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:01.625 07:25:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.625 07:25:10 -- common/autotest_common.sh@10 -- # set +x 00:10:01.625 ************************************ 00:10:01.625 START TEST nvme_single_aen 00:10:01.625 ************************************ 00:10:01.625 07:25:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:01.625 [2024-11-19 07:25:10.788400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:01.625 [2024-11-19 07:25:10.788469] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.920 [2024-11-19 07:25:10.921943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:01.920 [2024-11-19 07:25:10.924378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:01.920 [2024-11-19 07:25:10.926745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:01.920 [2024-11-19 07:25:10.928802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:01.920 Asynchronous Event Request test 00:10:01.920 Attached to 0000:00:06.0 00:10:01.920 Attached to 0000:00:07.0 00:10:01.920 Attached to 0000:00:09.0 00:10:01.921 Attached to 0000:00:08.0 00:10:01.921 Reset controller to setup AER completions for this process 00:10:01.921 Registering asynchronous event callbacks... 
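The single-AEN test whose output continues below works by reading each controller's original temperature threshold (343 Kelvin, i.e. 70 Celsius), setting the threshold beneath the current temperature (323 Kelvin, 50 Celsius) so the controller fires an asynchronous event, and restoring it once the aer_cb callback has run. NVMe reports these temperature fields in Kelvin; a trivial sketch of the conversion used in the log's parenthesized values (the helper name is illustrative):

    # Illustrative only: NVMe temperature fields are in Kelvin.
    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c 343   # -> 70
    k_to_c 323   # -> 50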
00:10:01.921 Getting orig temperature thresholds of all controllers 00:10:01.921 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.921 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.921 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.921 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:01.921 Setting all controllers temperature threshold low to trigger AER 00:10:01.921 Waiting for all controllers temperature threshold to be set lower 00:10:01.921 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.921 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:01.921 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.921 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:01.921 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.921 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:01.921 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:01.921 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:01.921 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.921 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.921 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.921 Waiting for all controllers to trigger AER and reset threshold 00:10:01.921 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.921 Cleaning up... 00:10:01.921 00:10:01.921 real 0m0.214s 00:10:01.921 user 0m0.077s 00:10:01.921 sys 0m0.093s 00:10:01.921 07:25:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:01.921 07:25:10 -- common/autotest_common.sh@10 -- # set +x 00:10:01.921 ************************************ 00:10:01.921 END TEST nvme_single_aen 00:10:01.921 ************************************ 00:10:01.921 07:25:11 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:01.921 07:25:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:01.921 07:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.921 07:25:11 -- common/autotest_common.sh@10 -- # set +x 00:10:01.921 ************************************ 00:10:01.921 START TEST nvme_doorbell_aers 00:10:01.921 ************************************ 00:10:01.921 07:25:11 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:10:01.921 07:25:11 -- nvme/nvme.sh@70 -- # bdfs=() 00:10:01.921 07:25:11 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:01.921 07:25:11 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:01.921 07:25:11 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:01.921 07:25:11 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:01.921 07:25:11 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:01.921 07:25:11 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:01.921 07:25:11 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:01.921 07:25:11 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:01.921 07:25:11 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:01.921 07:25:11 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:01.921 07:25:11 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:01.921 07:25:11 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:02.186 [2024-11-19 07:25:11.280712] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:12.151 Executing: test_write_invalid_db 00:10:12.151 Waiting for AER completion... 00:10:12.151 Failure: test_write_invalid_db 00:10:12.151 00:10:12.151 Executing: test_invalid_db_write_overflow_sq 00:10:12.151 Waiting for AER completion... 00:10:12.151 Failure: test_invalid_db_write_overflow_sq 00:10:12.151 00:10:12.152 Executing: test_invalid_db_write_overflow_cq 00:10:12.152 Waiting for AER completion... 00:10:12.152 Failure: test_invalid_db_write_overflow_cq 00:10:12.152 00:10:12.152 07:25:21 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:12.152 07:25:21 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:12.152 [2024-11-19 07:25:21.322757] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:22.147 Executing: test_write_invalid_db 00:10:22.147 Waiting for AER completion... 00:10:22.147 Failure: test_write_invalid_db 00:10:22.147 00:10:22.147 Executing: test_invalid_db_write_overflow_sq 00:10:22.147 Waiting for AER completion... 00:10:22.147 Failure: test_invalid_db_write_overflow_sq 00:10:22.147 00:10:22.147 Executing: test_invalid_db_write_overflow_cq 00:10:22.147 Waiting for AER completion... 00:10:22.147 Failure: test_invalid_db_write_overflow_cq 00:10:22.147 00:10:22.147 07:25:31 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:22.147 07:25:31 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:22.147 [2024-11-19 07:25:31.322733] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:32.116 Executing: test_write_invalid_db 00:10:32.116 Waiting for AER completion... 00:10:32.116 Failure: test_write_invalid_db 00:10:32.116 00:10:32.116 Executing: test_invalid_db_write_overflow_sq 00:10:32.116 Waiting for AER completion... 00:10:32.116 Failure: test_invalid_db_write_overflow_sq 00:10:32.116 00:10:32.116 Executing: test_invalid_db_write_overflow_cq 00:10:32.116 Waiting for AER completion... 00:10:32.116 Failure: test_invalid_db_write_overflow_cq 00:10:32.116 00:10:32.116 07:25:41 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:32.116 07:25:41 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:32.374 [2024-11-19 07:25:41.384881] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 Executing: test_write_invalid_db 00:10:42.404 Waiting for AER completion... 00:10:42.404 Failure: test_write_invalid_db 00:10:42.404 00:10:42.404 Executing: test_invalid_db_write_overflow_sq 00:10:42.404 Waiting for AER completion... 00:10:42.404 Failure: test_invalid_db_write_overflow_sq 00:10:42.404 00:10:42.404 Executing: test_invalid_db_write_overflow_cq 00:10:42.404 Waiting for AER completion... 
00:10:42.404 Failure: test_invalid_db_write_overflow_cq 00:10:42.404 00:10:42.404 00:10:42.404 real 0m40.189s 00:10:42.404 user 0m34.031s 00:10:42.404 sys 0m5.774s 00:10:42.404 07:25:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.404 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.404 ************************************ 00:10:42.404 END TEST nvme_doorbell_aers 00:10:42.404 ************************************ 00:10:42.404 07:25:51 -- nvme/nvme.sh@97 -- # uname 00:10:42.404 07:25:51 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:42.404 07:25:51 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:42.404 07:25:51 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:42.404 07:25:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.404 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.404 ************************************ 00:10:42.404 START TEST nvme_multi_aen 00:10:42.404 ************************************ 00:10:42.404 07:25:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:42.404 [2024-11-19 07:25:51.295873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:42.404 [2024-11-19 07:25:51.296051] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.404 [2024-11-19 07:25:51.436882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:42.404 [2024-11-19 07:25:51.437019] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.437052] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.437062] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.438047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:42.404 [2024-11-19 07:25:51.438063] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.438079] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.438087] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.438996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:42.404 [2024-11-19 07:25:51.439070] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.439123] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 
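The repeated "The owning process (pid 64050) is not found. Dropping the request." notices here are expected rather than fatal: pid 64050 belongs to an earlier, already-exited test process, and when the multi-AEN run (aer -m -T -i 0 -L log) resets each controller, admin requests still queued on behalf of the dead owner are drained instead of completed. An illustrative triage one-liner (the multi_aen.log capture name is an assumption) to confirm all four controllers went through a reset:

    grep -o '\[0000:00:0[6-9]\.0\] resetting controller' multi_aen.log | sort | uniq -c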
00:10:42.404 [2024-11-19 07:25:51.439151] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.440241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:42.404 [2024-11-19 07:25:51.440332] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.440397] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 [2024-11-19 07:25:51.440425] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64050) is not found. Dropping the request. 00:10:42.404 Child process pid: 64561 00:10:42.404 [2024-11-19 07:25:51.444129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:42.404 [2024-11-19 07:25:51.444192] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.404 [Child] Asynchronous Event Request test 00:10:42.404 [Child] Attached to 0000:00:06.0 00:10:42.404 [Child] Attached to 0000:00:07.0 00:10:42.404 [Child] Attached to 0000:00:09.0 00:10:42.404 [Child] Attached to 0000:00:08.0 00:10:42.404 [Child] Registering asynchronous event callbacks... 00:10:42.404 [Child] Getting orig temperature thresholds of all controllers 00:10:42.404 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.404 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.404 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.404 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.404 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:42.404 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.404 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.404 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.404 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.404 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.404 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.404 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.404 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.404 [Child] Cleaning up... 00:10:42.666 Asynchronous Event Request test 00:10:42.666 Attached to 0000:00:06.0 00:10:42.666 Attached to 0000:00:07.0 00:10:42.666 Attached to 0000:00:09.0 00:10:42.666 Attached to 0000:00:08.0 00:10:42.666 Reset controller to setup AER completions for this process 00:10:42.666 Registering asynchronous event callbacks... 
00:10:42.666 Getting orig temperature thresholds of all controllers 00:10:42.666 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.666 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.666 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.666 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:42.666 Setting all controllers temperature threshold low to trigger AER 00:10:42.666 Waiting for all controllers temperature threshold to be set lower 00:10:42.666 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.666 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:42.666 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.666 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:42.666 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.666 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:42.666 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:42.666 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:42.666 Waiting for all controllers to trigger AER and reset threshold 00:10:42.666 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.666 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.666 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.666 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.666 Cleaning up... 00:10:42.666 00:10:42.666 real 0m0.415s 00:10:42.666 user 0m0.119s 00:10:42.666 sys 0m0.172s 00:10:42.666 07:25:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.666 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.666 ************************************ 00:10:42.666 END TEST nvme_multi_aen 00:10:42.666 ************************************ 00:10:42.666 07:25:51 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:42.666 07:25:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:42.666 07:25:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.666 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.666 ************************************ 00:10:42.666 START TEST nvme_startup 00:10:42.666 ************************************ 00:10:42.666 07:25:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:42.666 Initializing NVMe Controllers 00:10:42.666 Attached to 0000:00:06.0 00:10:42.666 Attached to 0000:00:07.0 00:10:42.666 Attached to 0000:00:09.0 00:10:42.666 Attached to 0000:00:08.0 00:10:42.666 Initialization complete. 00:10:42.666 Time used:116222.844 (us). 
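The nvme_startup run below attaches to all four controllers under a 1000000 us budget (startup -t 1000000) and prints how long initialization took. The "Time used" figure is in microseconds; a trivial illustrative conversion shows it is consistent with the wall-clock time reported just after it:

    # Illustrative only: microseconds to seconds.
    awk 'BEGIN { printf "%.3f s\n", 116222.844 / 1e6 }'   # -> 0.116, vs real 0m0.169s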
00:10:42.666 00:10:42.666 real 0m0.169s 00:10:42.666 user 0m0.049s 00:10:42.666 sys 0m0.090s 00:10:42.666 07:25:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.666 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.666 ************************************ 00:10:42.666 END TEST nvme_startup 00:10:42.666 ************************************ 00:10:42.666 07:25:51 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:42.666 07:25:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:42.666 07:25:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.666 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.666 ************************************ 00:10:42.666 START TEST nvme_multi_secondary 00:10:42.666 ************************************ 00:10:42.666 07:25:51 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:42.666 07:25:51 -- nvme/nvme.sh@52 -- # pid0=64617 00:10:42.666 07:25:51 -- nvme/nvme.sh@54 -- # pid1=64618 00:10:42.666 07:25:51 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:42.666 07:25:51 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:42.928 07:25:51 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:46.235 Initializing NVMe Controllers 00:10:46.235 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:46.235 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:46.235 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:46.235 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:46.235 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:46.235 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:46.235 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:46.235 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:46.235 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:46.235 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:46.235 Initialization complete. Launching workers. 
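For orientation in the nvme_multi_secondary output that follows: the primary spdk_nvme_perf instance is pinned to core 0 (-c 0x1, 5 s run) and the two secondaries (pids 64617 and 64618 above) to cores 1 and 2 (-c 0x2 and -c 0x4, 3 s runs). In the 4096-byte read tables, the MiB/s column is just IOPS scaled by the I/O size; an illustrative check against the first core 1 row:

    # Illustrative only: MiB/s = IOPS * 4096 / 2^20.
    awk 'BEGIN { printf "%.2f MiB/s\n", 5710.04 * 4096 / 1048576 }'   # -> 22.30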
00:10:46.235 ======================================================== 00:10:46.235 Latency(us) 00:10:46.235 Device Information : IOPS MiB/s Average min max 00:10:46.235 PCIE (0000:00:06.0) NSID 1 from core 1: 5710.04 22.30 2800.63 1090.07 8435.93 00:10:46.235 PCIE (0000:00:07.0) NSID 1 from core 1: 5710.04 22.30 2801.63 1070.34 8707.48 00:10:46.235 PCIE (0000:00:09.0) NSID 1 from core 1: 5710.04 22.30 2801.84 1090.98 8990.90 00:10:46.235 PCIE (0000:00:08.0) NSID 1 from core 1: 5710.04 22.30 2802.02 1081.59 9700.98 00:10:46.235 PCIE (0000:00:08.0) NSID 2 from core 1: 5710.04 22.30 2802.22 1068.01 8185.98 00:10:46.235 PCIE (0000:00:08.0) NSID 3 from core 1: 5710.04 22.30 2802.20 1080.31 8420.38 00:10:46.235 ======================================================== 00:10:46.235 Total : 34260.21 133.83 2801.76 1068.01 9700.98 00:10:46.235 00:10:46.235 Initializing NVMe Controllers 00:10:46.235 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:46.235 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:46.235 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:46.235 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:46.235 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:46.235 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:46.235 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:46.235 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:46.235 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:46.235 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:46.235 Initialization complete. Launching workers. 00:10:46.235 ======================================================== 00:10:46.235 Latency(us) 00:10:46.235 Device Information : IOPS MiB/s Average min max 00:10:46.235 PCIE (0000:00:06.0) NSID 1 from core 2: 2336.70 9.13 6845.93 1524.87 15649.49 00:10:46.235 PCIE (0000:00:07.0) NSID 1 from core 2: 2336.70 9.13 6847.28 1550.88 15741.18 00:10:46.235 PCIE (0000:00:09.0) NSID 1 from core 2: 2336.70 9.13 6847.44 1439.05 13729.06 00:10:46.235 PCIE (0000:00:08.0) NSID 1 from core 2: 2336.70 9.13 6846.78 1560.02 17661.84 00:10:46.235 PCIE (0000:00:08.0) NSID 2 from core 2: 2336.70 9.13 6847.31 1517.67 16732.66 00:10:46.235 PCIE (0000:00:08.0) NSID 3 from core 2: 2336.70 9.13 6847.51 1538.85 16162.64 00:10:46.235 ======================================================== 00:10:46.235 Total : 14020.19 54.77 6847.04 1439.05 17661.84 00:10:46.235 00:10:46.235 07:25:55 -- nvme/nvme.sh@56 -- # wait 64617 00:10:48.153 Initializing NVMe Controllers 00:10:48.153 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:48.153 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:48.153 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:48.153 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:48.153 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:48.153 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:48.153 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:48.153 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:48.153 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:48.153 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:48.153 Initialization complete. Launching workers. 
00:10:48.153 ======================================================== 00:10:48.153 Latency(us) 00:10:48.153 Device Information : IOPS MiB/s Average min max 00:10:48.153 PCIE (0000:00:06.0) NSID 1 from core 0: 8213.16 32.08 1946.79 755.39 7052.21 00:10:48.153 PCIE (0000:00:07.0) NSID 1 from core 0: 8213.16 32.08 1947.66 773.17 7814.67 00:10:48.153 PCIE (0000:00:09.0) NSID 1 from core 0: 8213.16 32.08 1947.63 766.22 7414.55 00:10:48.153 PCIE (0000:00:08.0) NSID 1 from core 0: 8213.16 32.08 1947.60 769.28 7155.15 00:10:48.153 PCIE (0000:00:08.0) NSID 2 from core 0: 8213.16 32.08 1947.56 748.61 7627.57 00:10:48.153 PCIE (0000:00:08.0) NSID 3 from core 0: 8213.16 32.08 1947.54 739.45 7167.44 00:10:48.153 ======================================================== 00:10:48.153 Total : 49278.99 192.50 1947.46 739.45 7814.67 00:10:48.153 00:10:48.153 07:25:57 -- nvme/nvme.sh@57 -- # wait 64618 00:10:48.153 07:25:57 -- nvme/nvme.sh@61 -- # pid0=64687 00:10:48.153 07:25:57 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:48.153 07:25:57 -- nvme/nvme.sh@63 -- # pid1=64688 00:10:48.153 07:25:57 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:48.154 07:25:57 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:51.457 Initializing NVMe Controllers 00:10:51.457 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:51.457 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:51.457 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:51.457 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:51.457 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:51.457 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:51.457 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:51.457 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:51.457 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:51.457 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:51.457 Initialization complete. Launching workers. 
00:10:51.457 ======================================================== 00:10:51.457 Latency(us) 00:10:51.457 Device Information : IOPS MiB/s Average min max 00:10:51.457 PCIE (0000:00:06.0) NSID 1 from core 0: 4806.03 18.77 3327.57 1026.95 11876.48 00:10:51.457 PCIE (0000:00:07.0) NSID 1 from core 0: 4806.03 18.77 3328.78 1020.89 10540.80 00:10:51.457 PCIE (0000:00:09.0) NSID 1 from core 0: 4806.03 18.77 3329.27 1034.29 10053.65 00:10:51.457 PCIE (0000:00:08.0) NSID 1 from core 0: 4806.03 18.77 3329.27 1036.29 11810.78 00:10:51.457 PCIE (0000:00:08.0) NSID 2 from core 0: 4806.03 18.77 3329.54 1046.63 12143.84 00:10:51.457 PCIE (0000:00:08.0) NSID 3 from core 0: 4806.03 18.77 3329.52 1048.48 11820.99 00:10:51.457 ======================================================== 00:10:51.457 Total : 28836.16 112.64 3328.99 1020.89 12143.84 00:10:51.457 00:10:51.457 Initializing NVMe Controllers 00:10:51.457 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:51.457 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:51.457 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:51.457 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:51.457 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:51.457 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:51.457 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:51.457 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:51.457 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:51.457 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:51.457 Initialization complete. Launching workers. 00:10:51.457 ======================================================== 00:10:51.457 Latency(us) 00:10:51.457 Device Information : IOPS MiB/s Average min max 00:10:51.457 PCIE (0000:00:06.0) NSID 1 from core 1: 4698.41 18.35 3403.77 1109.97 11220.35 00:10:51.457 PCIE (0000:00:07.0) NSID 1 from core 1: 4698.41 18.35 3405.05 1103.51 12365.04 00:10:51.457 PCIE (0000:00:09.0) NSID 1 from core 1: 4698.41 18.35 3405.30 1030.03 14002.91 00:10:51.457 PCIE (0000:00:08.0) NSID 1 from core 1: 4698.41 18.35 3405.29 1105.07 14470.72 00:10:51.457 PCIE (0000:00:08.0) NSID 2 from core 1: 4698.41 18.35 3405.39 1131.44 14966.17 00:10:51.457 PCIE (0000:00:08.0) NSID 3 from core 1: 4698.41 18.35 3405.33 1134.47 13358.12 00:10:51.457 ======================================================== 00:10:51.457 Total : 28190.48 110.12 3405.02 1030.03 14966.17 00:10:51.457 00:10:53.997 Initializing NVMe Controllers 00:10:53.997 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:53.997 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:53.998 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:53.998 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:53.998 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:53.998 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:53.998 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:53.998 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:53.998 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:53.998 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:53.998 Initialization complete. Launching workers. 
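The tables in this second pass obey the same queue-depth arithmetic as the first: with -q 16 outstanding I/Os per namespace, average latency in microseconds should be roughly qd * 1e6 / IOPS. An illustrative check against the ~3305 IOPS core 2 rows below:

    # Illustrative only: Little's law at queue depth 16.
    awk 'BEGIN { printf "%.0f us\n", 16 * 1e6 / 3305 }'   # -> 4841

which is within rounding of the ~4840 us averages reported in that table.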
00:10:53.998 ======================================================== 00:10:53.998 Latency(us) 00:10:53.998 Device Information : IOPS MiB/s Average min max 00:10:53.998 PCIE (0000:00:06.0) NSID 1 from core 2: 3305.00 12.91 4839.23 1062.27 29526.11 00:10:53.998 PCIE (0000:00:07.0) NSID 1 from core 2: 3305.00 12.91 4840.75 984.53 30187.08 00:10:53.998 PCIE (0000:00:09.0) NSID 1 from core 2: 3305.00 12.91 4840.70 1121.49 30061.13 00:10:53.998 PCIE (0000:00:08.0) NSID 1 from core 2: 3305.00 12.91 4840.65 976.19 27417.49 00:10:53.998 PCIE (0000:00:08.0) NSID 2 from core 2: 3305.00 12.91 4840.36 823.99 27436.30 00:10:53.998 PCIE (0000:00:08.0) NSID 3 from core 2: 3305.00 12.91 4840.55 776.16 30976.31 00:10:53.998 ======================================================== 00:10:53.998 Total : 19830.02 77.46 4840.37 776.16 30976.31 00:10:53.998 00:10:53.998 ************************************ 00:10:53.998 END TEST nvme_multi_secondary 00:10:53.998 ************************************ 00:10:53.998 07:26:02 -- nvme/nvme.sh@65 -- # wait 64687 00:10:53.998 07:26:02 -- nvme/nvme.sh@66 -- # wait 64688 00:10:53.998 00:10:53.998 real 0m10.881s 00:10:53.998 user 0m18.607s 00:10:53.998 sys 0m0.655s 00:10:53.998 07:26:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.998 07:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:53.998 07:26:02 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:53.998 07:26:02 -- nvme/nvme.sh@102 -- # kill_stub 00:10:53.998 07:26:02 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63644 ]] 00:10:53.998 07:26:02 -- common/autotest_common.sh@1076 -- # kill 63644 00:10:53.998 07:26:02 -- common/autotest_common.sh@1077 -- # wait 63644 00:10:53.998 [2024-11-19 07:26:03.239809] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:53.998 [2024-11-19 07:26:03.239863] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:53.998 [2024-11-19 07:26:03.239874] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:53.998 [2024-11-19 07:26:03.239885] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:55.378 [2024-11-19 07:26:04.249344] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:55.378 [2024-11-19 07:26:04.249451] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:55.378 [2024-11-19 07:26:04.249477] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:55.378 [2024-11-19 07:26:04.249499] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:56.316 [2024-11-19 07:26:05.256805] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 
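The teardown sequence traced above follows the usual pattern for this file: kill_stub signals and reaps the long-lived stub process (pid 63644) that held the NVMe controllers for the whole test file, then removes its marker file, after which the "owning process (pid 64560) is not found" notices are the expected drain of requests left behind by an earlier AER test process. A condensed, illustrative sketch of the visible steps (pids and paths are the ones from this run; the real harness also waits on the stub, which is its child):

    # Condensed from the trace above (illustrative only).
    [[ -e /proc/63644 ]] && kill 63644   # stop the stub holding the devices
    rm -f /var/run/spdk_stub0            # remove its marker file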
00:10:56.316 [2024-11-19 07:26:05.256864] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:56.316 [2024-11-19 07:26:05.256875] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:56.316 [2024-11-19 07:26:05.256886] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:57.251 [2024-11-19 07:26:06.266511] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:57.251 [2024-11-19 07:26:06.266566] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:57.252 [2024-11-19 07:26:06.266578] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:57.252 [2024-11-19 07:26:06.266591] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64560) is not found. Dropping the request. 00:10:57.252 07:26:06 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:57.252 07:26:06 -- common/autotest_common.sh@1083 -- # echo 2 00:10:57.252 07:26:06 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:57.252 07:26:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:57.252 07:26:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.252 07:26:06 -- common/autotest_common.sh@10 -- # set +x 00:10:57.252 ************************************ 00:10:57.252 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:57.252 ************************************ 00:10:57.252 07:26:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:57.252 * Looking for test storage... 00:10:57.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:57.510 07:26:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:57.510 07:26:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:57.510 07:26:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:57.510 07:26:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:57.510 07:26:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:57.510 07:26:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:57.510 07:26:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:57.510 07:26:06 -- scripts/common.sh@335 -- # IFS=.-: 00:10:57.510 07:26:06 -- scripts/common.sh@335 -- # read -ra ver1 00:10:57.510 07:26:06 -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.510 07:26:06 -- scripts/common.sh@336 -- # read -ra ver2 00:10:57.510 07:26:06 -- scripts/common.sh@337 -- # local 'op=<' 00:10:57.510 07:26:06 -- scripts/common.sh@339 -- # ver1_l=2 00:10:57.510 07:26:06 -- scripts/common.sh@340 -- # ver2_l=1 00:10:57.510 07:26:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:57.510 07:26:06 -- scripts/common.sh@343 -- # case "$op" in 00:10:57.510 07:26:06 -- scripts/common.sh@344 -- # : 1 00:10:57.510 07:26:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:57.510 07:26:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.510 07:26:06 -- scripts/common.sh@364 -- # decimal 1 00:10:57.510 07:26:06 -- scripts/common.sh@352 -- # local d=1 00:10:57.510 07:26:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.510 07:26:06 -- scripts/common.sh@354 -- # echo 1 00:10:57.510 07:26:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:57.510 07:26:06 -- scripts/common.sh@365 -- # decimal 2 00:10:57.510 07:26:06 -- scripts/common.sh@352 -- # local d=2 00:10:57.510 07:26:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.510 07:26:06 -- scripts/common.sh@354 -- # echo 2 00:10:57.510 07:26:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:57.510 07:26:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:57.510 07:26:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:57.510 07:26:06 -- scripts/common.sh@367 -- # return 0 00:10:57.510 07:26:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.510 07:26:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:57.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.510 --rc genhtml_branch_coverage=1 00:10:57.510 --rc genhtml_function_coverage=1 00:10:57.510 --rc genhtml_legend=1 00:10:57.510 --rc geninfo_all_blocks=1 00:10:57.510 --rc geninfo_unexecuted_blocks=1 00:10:57.510 00:10:57.510 ' 00:10:57.510 07:26:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:57.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.510 --rc genhtml_branch_coverage=1 00:10:57.510 --rc genhtml_function_coverage=1 00:10:57.510 --rc genhtml_legend=1 00:10:57.510 --rc geninfo_all_blocks=1 00:10:57.510 --rc geninfo_unexecuted_blocks=1 00:10:57.510 00:10:57.510 ' 00:10:57.510 07:26:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:57.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.510 --rc genhtml_branch_coverage=1 00:10:57.510 --rc genhtml_function_coverage=1 00:10:57.510 --rc genhtml_legend=1 00:10:57.510 --rc geninfo_all_blocks=1 00:10:57.510 --rc geninfo_unexecuted_blocks=1 00:10:57.510 00:10:57.510 ' 00:10:57.510 07:26:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:57.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.510 --rc genhtml_branch_coverage=1 00:10:57.510 --rc genhtml_function_coverage=1 00:10:57.510 --rc genhtml_legend=1 00:10:57.510 --rc geninfo_all_blocks=1 00:10:57.510 --rc geninfo_unexecuted_blocks=1 00:10:57.510 00:10:57.510 ' 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:57.511 07:26:06 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:57.511 07:26:06 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:57.511 07:26:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:57.511 07:26:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:57.511 07:26:06 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:57.511 07:26:06 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:57.511 07:26:06 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:57.511 07:26:06 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:57.511 07:26:06 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:57.511 07:26:06 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:57.511 07:26:06 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:57.511 07:26:06 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64884 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:57.511 07:26:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64884 00:10:57.511 07:26:06 -- common/autotest_common.sh@829 -- # '[' -z 64884 ']' 00:10:57.511 07:26:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.511 07:26:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:57.511 07:26:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.511 07:26:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:57.511 07:26:06 -- common/autotest_common.sh@10 -- # set +x 00:10:57.511 [2024-11-19 07:26:06.709821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
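The bdev_nvme_reset_stuck_adm_cmd test starting here exercises controller reset while an admin command is deliberately held: spdk_tgt attaches the first controller as nvme0, an error injection arms the next Get Features admin command (opc 10) to be withheld for up to 15 s and then completed with sct 0 / sc 1 (Invalid Opcode), bdev_nvme_send_cmd issues that command, and bdev_nvme_reset_controller must flush it promptly through the reset path. Condensed from the RPC calls visible in this trace (rpc.py invoked as in this workspace):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_reset_controller nvme0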
00:10:57.511 [2024-11-19 07:26:06.710319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64884 ] 00:10:57.769 [2024-11-19 07:26:06.869006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.027 [2024-11-19 07:26:07.053272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:58.027 [2024-11-19 07:26:07.053680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.027 [2024-11-19 07:26:07.054025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.027 [2024-11-19 07:26:07.054085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.027 [2024-11-19 07:26:07.054100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.961 07:26:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.961 07:26:08 -- common/autotest_common.sh@862 -- # return 0 00:10:58.961 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:58.961 07:26:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.961 07:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 nvme0n1 00:10:59.218 07:26:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_S41rk.txt 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:59.218 07:26:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.218 07:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 true 00:10:59.218 07:26:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732001168 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64915 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:59.218 07:26:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:01.116 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:01.116 07:26:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.116 07:26:10 -- common/autotest_common.sh@10 -- # set +x 00:11:01.116 [2024-11-19 07:26:10.286037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:01.116 [2024-11-19 07:26:10.286276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:01.116 [2024-11-19 07:26:10.286296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:01.116 [2024-11-19 07:26:10.286307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.116 [2024-11-19 07:26:10.287607] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:01.116 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64915 00:11:01.116 07:26:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.116 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64915 00:11:01.116 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64915 00:11:01.116 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:01.116 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:01.116 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:01.116 07:26:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.116 07:26:10 -- common/autotest_common.sh@10 -- # set +x 00:11:01.116 07:26:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.116 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_S41rk.txt 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:01.117 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:01.377 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:01.377 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:01.377 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:01.377 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:01.377 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:01.377 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_S41rk.txt 00:11:01.377 07:26:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64884 00:11:01.377 07:26:10 -- common/autotest_common.sh@936 -- # '[' -z 64884 ']' 00:11:01.377 07:26:10 -- common/autotest_common.sh@940 -- # kill -0 64884 00:11:01.377 07:26:10 -- common/autotest_common.sh@941 -- # uname 00:11:01.377 07:26:10 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.377 07:26:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64884 00:11:01.377 killing process with pid 64884 00:11:01.377 07:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:01.377 07:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:01.377 07:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64884' 00:11:01.377 07:26:10 -- common/autotest_common.sh@955 -- # kill 64884 00:11:01.377 07:26:10 -- common/autotest_common.sh@960 -- # wait 64884 00:11:02.757 07:26:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:02.757 07:26:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:02.757 ************************************ 00:11:02.757 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:02.757 ************************************ 00:11:02.757 00:11:02.757 real 0m5.129s 00:11:02.757 user 0m18.242s 00:11:02.757 sys 0m0.506s 00:11:02.757 07:26:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:02.757 07:26:11 -- common/autotest_common.sh@10 -- # set +x 00:11:02.757 07:26:11 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:02.757 07:26:11 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:02.757 07:26:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:02.757 07:26:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.757 07:26:11 -- common/autotest_common.sh@10 -- # set +x 00:11:02.757 ************************************ 00:11:02.757 START TEST nvme_fio 00:11:02.757 ************************************ 00:11:02.757 07:26:11 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:11:02.757 07:26:11 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:02.757 07:26:11 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:02.757 07:26:11 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:02.757 07:26:11 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:02.757 07:26:11 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:02.757 07:26:11 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:02.757 07:26:11 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:02.757 07:26:11 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:02.757 07:26:11 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:02.757 07:26:11 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:02.757 07:26:11 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:11:02.757 07:26:11 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:02.757 07:26:11 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:02.757 07:26:11 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:02.757 07:26:11 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:02.757 07:26:11 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:02.757 07:26:11 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:03.018 07:26:12 -- nvme/nvme.sh@41 -- # bs=4096 00:11:03.018 07:26:12 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:03.018 07:26:12 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:03.018 07:26:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:03.018 07:26:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:03.018 07:26:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:03.018 07:26:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:03.018 07:26:12 -- common/autotest_common.sh@1330 -- # shift 00:11:03.018 07:26:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:03.018 07:26:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:03.018 07:26:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:03.018 07:26:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:03.018 07:26:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:03.018 07:26:12 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:03.018 07:26:12 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:03.018 07:26:12 -- common/autotest_common.sh@1336 -- # break 00:11:03.018 07:26:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:03.018 07:26:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:03.278 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:03.278 fio-3.35 00:11:03.278 Starting 1 thread 00:11:09.888 00:11:09.888 test: (groupid=0, jobs=1): err= 0: pid=65050: Tue Nov 19 07:26:17 2024 00:11:09.888 read: IOPS=23.2k, BW=90.6MiB/s (95.0MB/s)(181MiB/2001msec) 00:11:09.888 slat (nsec): min=3286, max=73963, avg=5082.88, stdev=2475.86 00:11:09.888 clat (usec): min=208, max=9342, avg=2752.47, stdev=938.72 00:11:09.888 lat (usec): min=212, max=9356, avg=2757.55, stdev=940.37 00:11:09.888 clat percentiles (usec): 00:11:09.888 | 1.00th=[ 1893], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:11:09.888 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:11:09.888 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3785], 95.00th=[ 5342], 00:11:09.888 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 8291], 99.95th=[ 8848], 00:11:09.888 | 99.99th=[ 9110] 00:11:09.888 bw ( KiB/s): min=87113, max=96240, per=100.00%, avg=93024.33, stdev=5125.96, samples=3 00:11:09.888 iops : min=21778, max=24060, avg=23256.00, stdev=1281.63, samples=3 00:11:09.888 write: IOPS=23.1k, BW=90.1MiB/s (94.4MB/s)(180MiB/2001msec); 0 zone resets 00:11:09.888 slat (nsec): min=3463, max=80289, avg=5378.26, stdev=2467.04 00:11:09.888 clat (usec): min=199, max=9275, avg=2759.50, stdev=934.29 00:11:09.888 lat (usec): min=204, max=9288, avg=2764.88, stdev=935.95 00:11:09.888 clat percentiles (usec): 00:11:09.888 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:11:09.888 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:11:09.888 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3818], 95.00th=[ 5342], 00:11:09.888 | 99.00th=[ 6390], 99.50th=[ 
6652], 99.90th=[ 7898], 99.95th=[ 8717], 00:11:09.888 | 99.99th=[ 9241] 00:11:09.888 bw ( KiB/s): min=86778, max=97424, per=100.00%, avg=93142.00, stdev=5620.09, samples=3 00:11:09.888 iops : min=21694, max=24356, avg=23285.33, stdev=1405.30, samples=3 00:11:09.888 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:11:09.888 lat (msec) : 2=1.75%, 4=89.07%, 10=9.13% 00:11:09.888 cpu : usr=99.05%, sys=0.15%, ctx=4, majf=0, minf=608 00:11:09.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:09.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.888 issued rwts: total=46415,46136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.888 00:11:09.888 Run status group 0 (all jobs): 00:11:09.888 READ: bw=90.6MiB/s (95.0MB/s), 90.6MiB/s-90.6MiB/s (95.0MB/s-95.0MB/s), io=181MiB (190MB), run=2001-2001msec 00:11:09.888 WRITE: bw=90.1MiB/s (94.4MB/s), 90.1MiB/s-90.1MiB/s (94.4MB/s-94.4MB/s), io=180MiB (189MB), run=2001-2001msec 00:11:09.888 ----------------------------------------------------- 00:11:09.888 Suppressions used: 00:11:09.888 count bytes template 00:11:09.888 1 32 /usr/src/fio/parse.c 00:11:09.888 1 8 libtcmalloc_minimal.so 00:11:09.888 ----------------------------------------------------- 00:11:09.888 00:11:09.888 07:26:18 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:09.888 07:26:18 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:09.888 07:26:18 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:09.888 07:26:18 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:09.888 07:26:18 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:09.888 07:26:18 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:09.888 07:26:18 -- nvme/nvme.sh@41 -- # bs=4096 00:11:09.888 07:26:18 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:09.888 07:26:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:09.888 07:26:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:09.888 07:26:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:09.888 07:26:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:09.888 07:26:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:09.888 07:26:18 -- common/autotest_common.sh@1330 -- # shift 00:11:09.888 07:26:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:09.888 07:26:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:09.888 07:26:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:09.888 07:26:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:09.888 07:26:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:09.888 07:26:18 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:09.888 07:26:18 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:11:09.888 07:26:18 -- common/autotest_common.sh@1336 -- # break 00:11:09.888 07:26:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:09.888 07:26:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:09.888 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:09.888 fio-3.35 00:11:09.888 Starting 1 thread 00:11:16.446 00:11:16.446 test: (groupid=0, jobs=1): err= 0: pid=65120: Tue Nov 19 07:26:24 2024 00:11:16.446 read: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(180MiB/2001msec) 00:11:16.446 slat (nsec): min=3307, max=72768, avg=4946.83, stdev=2201.92 00:11:16.446 clat (usec): min=239, max=9608, avg=2762.43, stdev=845.59 00:11:16.446 lat (usec): min=243, max=9619, avg=2767.38, stdev=846.99 00:11:16.446 clat percentiles (usec): 00:11:16.446 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2409], 00:11:16.446 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:11:16.446 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 3294], 95.00th=[ 4883], 00:11:16.446 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 8717], 99.95th=[ 8979], 00:11:16.446 | 99.99th=[ 9372] 00:11:16.446 bw ( KiB/s): min=90304, max=96192, per=99.91%, avg=92274.67, stdev=3392.53, samples=3 00:11:16.446 iops : min=22576, max=24048, avg=23068.67, stdev=848.13, samples=3 00:11:16.446 write: IOPS=23.0k, BW=89.7MiB/s (94.1MB/s)(179MiB/2001msec); 0 zone resets 00:11:16.446 slat (usec): min=3, max=124, avg= 5.30, stdev= 2.39 00:11:16.446 clat (usec): min=211, max=9612, avg=2776.38, stdev=860.18 00:11:16.446 lat (usec): min=215, max=9624, avg=2781.69, stdev=861.61 00:11:16.446 clat percentiles (usec): 00:11:16.446 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2442], 00:11:16.446 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:11:16.446 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3425], 95.00th=[ 4948], 00:11:16.446 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 8586], 99.95th=[ 8848], 00:11:16.446 | 99.99th=[ 9372] 00:11:16.446 bw ( KiB/s): min=89720, max=95424, per=100.00%, avg=92378.67, stdev=2871.59, samples=3 00:11:16.446 iops : min=22430, max=23856, avg=23094.67, stdev=717.90, samples=3 00:11:16.446 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:11:16.446 lat (msec) : 2=1.02%, 4=91.54%, 10=7.38% 00:11:16.446 cpu : usr=99.25%, sys=0.05%, ctx=3, majf=0, minf=608 00:11:16.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:16.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.446 issued rwts: total=46202,45946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.446 00:11:16.446 Run status group 0 (all jobs): 00:11:16.446 READ: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=180MiB (189MB), run=2001-2001msec 00:11:16.446 WRITE: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=179MiB (188MB), run=2001-2001msec 00:11:16.446 ----------------------------------------------------- 00:11:16.446 Suppressions used: 00:11:16.446 count bytes template 00:11:16.446 1 32 /usr/src/fio/parse.c 00:11:16.446 1 8 libtcmalloc_minimal.so 00:11:16.446 
----------------------------------------------------- 00:11:16.446 00:11:16.446 07:26:25 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:16.446 07:26:25 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:16.446 07:26:25 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:16.446 07:26:25 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:16.446 07:26:25 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:16.446 07:26:25 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:16.446 07:26:25 -- nvme/nvme.sh@41 -- # bs=4096 00:11:16.446 07:26:25 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:16.446 07:26:25 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:16.446 07:26:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:16.446 07:26:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:16.446 07:26:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:16.446 07:26:25 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.446 07:26:25 -- common/autotest_common.sh@1330 -- # shift 00:11:16.446 07:26:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:16.446 07:26:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:16.446 07:26:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:16.446 07:26:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.446 07:26:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:16.446 07:26:25 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:16.447 07:26:25 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:16.447 07:26:25 -- common/autotest_common.sh@1336 -- # break 00:11:16.447 07:26:25 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:16.447 07:26:25 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:16.447 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:16.447 fio-3.35 00:11:16.447 Starting 1 thread 00:11:23.015 00:11:23.015 test: (groupid=0, jobs=1): err= 0: pid=65187: Tue Nov 19 07:26:32 2024 00:11:23.015 read: IOPS=23.2k, BW=90.8MiB/s (95.2MB/s)(182MiB/2001msec) 00:11:23.015 slat (nsec): min=3294, max=82459, avg=5073.86, stdev=2446.28 00:11:23.015 clat (usec): min=207, max=10081, avg=2751.10, stdev=941.84 00:11:23.015 lat (usec): min=211, max=10133, avg=2756.17, stdev=943.33 00:11:23.015 clat percentiles (usec): 00:11:23.015 | 1.00th=[ 1582], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2343], 00:11:23.015 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:11:23.015 | 70.00th=[ 2573], 80.00th=[ 2802], 90.00th=[ 3884], 95.00th=[ 5080], 00:11:23.015 | 99.00th=[ 6456], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 8717], 00:11:23.015 | 99.99th=[ 9896] 
00:11:23.015 bw ( KiB/s): min=86656, max=100040, per=100.00%, avg=93176.00, stdev=6698.63, samples=3 00:11:23.015 iops : min=21664, max=25010, avg=23294.00, stdev=1674.66, samples=3 00:11:23.015 write: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(180MiB/2001msec); 0 zone resets 00:11:23.015 slat (nsec): min=3428, max=62367, avg=5371.52, stdev=2407.07 00:11:23.015 clat (usec): min=222, max=10007, avg=2752.54, stdev=941.83 00:11:23.015 lat (usec): min=227, max=10020, avg=2757.91, stdev=943.30 00:11:23.015 clat percentiles (usec): 00:11:23.015 | 1.00th=[ 1582], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2343], 00:11:23.015 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:11:23.015 | 70.00th=[ 2573], 80.00th=[ 2769], 90.00th=[ 3884], 95.00th=[ 5080], 00:11:23.015 | 99.00th=[ 6456], 99.50th=[ 7046], 99.90th=[ 8586], 99.95th=[ 8717], 00:11:23.015 | 99.99th=[ 9634] 00:11:23.015 bw ( KiB/s): min=87744, max=99888, per=100.00%, avg=93261.33, stdev=6147.53, samples=3 00:11:23.015 iops : min=21936, max=24972, avg=23315.33, stdev=1536.88, samples=3 00:11:23.015 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.04% 00:11:23.015 lat (msec) : 2=3.95%, 4=86.67%, 10=9.29%, 20=0.01% 00:11:23.015 cpu : usr=99.25%, sys=0.00%, ctx=6, majf=0, minf=608 00:11:23.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:23.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.015 issued rwts: total=46488,46198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.015 00:11:23.015 Run status group 0 (all jobs): 00:11:23.015 READ: bw=90.8MiB/s (95.2MB/s), 90.8MiB/s-90.8MiB/s (95.2MB/s-95.2MB/s), io=182MiB (190MB), run=2001-2001msec 00:11:23.015 WRITE: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=180MiB (189MB), run=2001-2001msec 00:11:23.274 ----------------------------------------------------- 00:11:23.274 Suppressions used: 00:11:23.274 count bytes template 00:11:23.274 1 32 /usr/src/fio/parse.c 00:11:23.274 1 8 libtcmalloc_minimal.so 00:11:23.274 ----------------------------------------------------- 00:11:23.274 00:11:23.274 07:26:32 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:23.274 07:26:32 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:23.274 07:26:32 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:23.274 07:26:32 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:23.532 07:26:32 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:23.532 07:26:32 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:23.791 07:26:32 -- nvme/nvme.sh@41 -- # bs=4096 00:11:23.791 07:26:32 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:23.791 07:26:32 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:23.791 07:26:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:23.791 07:26:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:23.791 07:26:32 -- common/autotest_common.sh@1328 -- # local 
sanitizers 00:11:23.791 07:26:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:23.791 07:26:32 -- common/autotest_common.sh@1330 -- # shift 00:11:23.791 07:26:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:23.791 07:26:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:23.791 07:26:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:23.791 07:26:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:23.791 07:26:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:23.791 07:26:32 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:23.791 07:26:32 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:23.791 07:26:32 -- common/autotest_common.sh@1336 -- # break 00:11:23.791 07:26:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:23.791 07:26:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:23.791 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:23.791 fio-3.35 00:11:23.791 Starting 1 thread 00:11:30.373 00:11:30.373 test: (groupid=0, jobs=1): err= 0: pid=65268: Tue Nov 19 07:26:39 2024 00:11:30.373 read: IOPS=16.6k, BW=64.8MiB/s (68.0MB/s)(130MiB/2001msec) 00:11:30.373 slat (nsec): min=4235, max=82035, avg=6178.04, stdev=3397.67 00:11:30.373 clat (usec): min=349, max=11644, avg=3825.95, stdev=1329.98 00:11:30.373 lat (usec): min=359, max=11705, avg=3832.13, stdev=1331.42 00:11:30.373 clat percentiles (usec): 00:11:30.373 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2802], 00:11:30.373 | 30.00th=[ 2966], 40.00th=[ 3130], 50.00th=[ 3294], 60.00th=[ 3556], 00:11:30.373 | 70.00th=[ 4228], 80.00th=[ 5014], 90.00th=[ 5800], 95.00th=[ 6456], 00:11:30.373 | 99.00th=[ 7898], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[10028], 00:11:30.373 | 99.99th=[10683] 00:11:30.373 bw ( KiB/s): min=58035, max=68856, per=97.22%, avg=64539.67, stdev=5732.81, samples=3 00:11:30.373 iops : min=14508, max=17214, avg=16134.67, stdev=1433.63, samples=3 00:11:30.373 write: IOPS=16.6k, BW=64.9MiB/s (68.1MB/s)(130MiB/2001msec); 0 zone resets 00:11:30.373 slat (nsec): min=4302, max=87907, avg=6360.46, stdev=3292.51 00:11:30.373 clat (usec): min=306, max=10737, avg=3849.14, stdev=1334.75 00:11:30.373 lat (usec): min=315, max=10752, avg=3855.50, stdev=1336.18 00:11:30.373 clat percentiles (usec): 00:11:30.373 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2835], 00:11:30.373 | 30.00th=[ 2966], 40.00th=[ 3130], 50.00th=[ 3326], 60.00th=[ 3589], 00:11:30.373 | 70.00th=[ 4228], 80.00th=[ 5014], 90.00th=[ 5866], 95.00th=[ 6521], 00:11:30.373 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[10159], 00:11:30.373 | 99.99th=[10552] 00:11:30.373 bw ( KiB/s): min=58355, max=68352, per=96.64%, avg=64270.33, stdev=5244.69, samples=3 00:11:30.373 iops : min=14588, max=17088, avg=16067.33, stdev=1311.59, samples=3 00:11:30.373 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:30.373 lat (msec) : 2=0.51%, 4=66.93%, 10=32.48%, 20=0.05% 00:11:30.373 cpu : usr=98.65%, sys=0.20%, ctx=5, majf=0, minf=606 00:11:30.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:30.373 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:30.373 issued rwts: total=33209,33268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:30.373 00:11:30.373 Run status group 0 (all jobs): 00:11:30.373 READ: bw=64.8MiB/s (68.0MB/s), 64.8MiB/s-64.8MiB/s (68.0MB/s-68.0MB/s), io=130MiB (136MB), run=2001-2001msec 00:11:30.373 WRITE: bw=64.9MiB/s (68.1MB/s), 64.9MiB/s-64.9MiB/s (68.1MB/s-68.1MB/s), io=130MiB (136MB), run=2001-2001msec 00:11:30.635 ----------------------------------------------------- 00:11:30.635 Suppressions used: 00:11:30.635 count bytes template 00:11:30.635 1 32 /usr/src/fio/parse.c 00:11:30.635 1 8 libtcmalloc_minimal.so 00:11:30.635 ----------------------------------------------------- 00:11:30.635 00:11:30.635 ************************************ 00:11:30.635 END TEST nvme_fio 00:11:30.635 ************************************ 00:11:30.635 07:26:39 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:30.635 07:26:39 -- nvme/nvme.sh@46 -- # true 00:11:30.635 00:11:30.635 real 0m28.049s 00:11:30.635 user 0m16.459s 00:11:30.635 sys 0m21.666s 00:11:30.635 07:26:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.635 07:26:39 -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 ************************************ 00:11:30.635 END TEST nvme 00:11:30.635 ************************************ 00:11:30.635 00:11:30.635 real 1m41.195s 00:11:30.635 user 3m39.043s 00:11:30.635 sys 0m32.117s 00:11:30.635 07:26:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.635 07:26:39 -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 07:26:39 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:30.635 07:26:39 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:30.635 07:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.635 07:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.635 07:26:39 -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 ************************************ 00:11:30.635 START TEST nvme_scc 00:11:30.635 ************************************ 00:11:30.635 07:26:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:30.635 * Looking for test storage... 
00:11:30.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:30.635 07:26:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:30.635 07:26:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:30.635 07:26:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.635 07:26:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.635 07:26:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.635 07:26:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.635 07:26:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.635 07:26:39 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.635 07:26:39 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.635 07:26:39 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.635 07:26:39 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.635 07:26:39 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.635 07:26:39 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.635 07:26:39 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.635 07:26:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.635 07:26:39 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.635 07:26:39 -- scripts/common.sh@344 -- # : 1 00:11:30.635 07:26:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.635 07:26:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.635 07:26:39 -- scripts/common.sh@364 -- # decimal 1 00:11:30.635 07:26:39 -- scripts/common.sh@352 -- # local d=1 00:11:30.635 07:26:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.635 07:26:39 -- scripts/common.sh@354 -- # echo 1 00:11:30.635 07:26:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.635 07:26:39 -- scripts/common.sh@365 -- # decimal 2 00:11:30.635 07:26:39 -- scripts/common.sh@352 -- # local d=2 00:11:30.635 07:26:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.635 07:26:39 -- scripts/common.sh@354 -- # echo 2 00:11:30.635 07:26:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.635 07:26:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.635 07:26:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.635 07:26:39 -- scripts/common.sh@367 -- # return 0 00:11:30.635 07:26:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.635 07:26:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.635 --rc genhtml_branch_coverage=1 00:11:30.635 --rc genhtml_function_coverage=1 00:11:30.635 --rc genhtml_legend=1 00:11:30.635 --rc geninfo_all_blocks=1 00:11:30.635 --rc geninfo_unexecuted_blocks=1 00:11:30.635 00:11:30.635 ' 00:11:30.635 07:26:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.635 --rc genhtml_branch_coverage=1 00:11:30.635 --rc genhtml_function_coverage=1 00:11:30.635 --rc genhtml_legend=1 00:11:30.635 --rc geninfo_all_blocks=1 00:11:30.635 --rc geninfo_unexecuted_blocks=1 00:11:30.635 00:11:30.635 ' 00:11:30.635 07:26:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.635 --rc genhtml_branch_coverage=1 00:11:30.635 --rc genhtml_function_coverage=1 00:11:30.635 --rc genhtml_legend=1 00:11:30.635 --rc geninfo_all_blocks=1 00:11:30.635 --rc geninfo_unexecuted_blocks=1 00:11:30.635 00:11:30.635 ' 00:11:30.635 07:26:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.635 --rc genhtml_branch_coverage=1 00:11:30.635 --rc genhtml_function_coverage=1 00:11:30.635 --rc genhtml_legend=1 00:11:30.635 --rc geninfo_all_blocks=1 00:11:30.635 --rc geninfo_unexecuted_blocks=1 00:11:30.635 00:11:30.635 ' 00:11:30.635 07:26:39 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:30.635 07:26:39 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:30.635 07:26:39 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:30.897 07:26:39 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:30.897 07:26:39 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.897 07:26:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.897 07:26:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.897 07:26:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.897 07:26:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 07:26:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 07:26:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 07:26:39 -- paths/export.sh@5 -- # export PATH 00:11:30.897 07:26:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.897 07:26:39 -- nvme/functions.sh@10 -- # ctrls=() 00:11:30.897 07:26:39 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:30.897 07:26:39 -- nvme/functions.sh@11 -- # nvmes=() 00:11:30.897 07:26:39 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:30.897 07:26:39 -- nvme/functions.sh@12 -- # bdfs=() 00:11:30.897 07:26:39 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:30.897 07:26:39 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:30.897 07:26:39 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:30.897 
07:26:39 -- nvme/functions.sh@14 -- # nvme_name= 00:11:30.897 07:26:39 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.897 07:26:39 -- nvme/nvme_scc.sh@12 -- # uname 00:11:30.897 07:26:39 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:30.897 07:26:39 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:30.897 07:26:39 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:31.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:31.155 Waiting for block devices as requested 00:11:31.155 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.416 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.416 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.416 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.719 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:36.719 07:26:45 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:36.719 07:26:45 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:36.719 07:26:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:36.719 07:26:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:36.719 07:26:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:36.719 07:26:45 -- scripts/common.sh@15 -- # local i 00:11:36.719 07:26:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:36.719 07:26:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:36.719 07:26:45 -- scripts/common.sh@24 -- # return 0 00:11:36.719 07:26:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:36.719 07:26:45 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:36.719 07:26:45 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@18 -- # shift 00:11:36.719 07:26:45 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 
07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.719 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.719 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.719 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:36.720 07:26:45 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:36.720 07:26:45 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.720 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.720 07:26:45 -- 
nvme/functions.sh@21-23 -- # nvme_get nvme0 id-ctrl /dev/nvme0 (continued)
00:11:36.720-722 07:26:45 -- remaining identify-controller fields parsed into the nvme0 array (reg=val):
  mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
  mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0
  anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
  oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
  mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0
  fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
  rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
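Note: every nvme0[...]= entry above is produced by the same capture loop. A minimal sketch of that loop, reconstructed from the nvme/functions.sh line numbers visible in the trace (@16-@23); the key/value normalization here is illustrative, not the script's exact code:

    # Parse `nvme id-ctrl` / `nvme id-ns` output into a global associative
    # array named after the device (nvme0, nvme1n1, ...), one reg=val per line.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # as traced: local -gA 'nvme1=()'

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # header/blank lines have no val; skipped (the [[ -n '' ]] misses in the trace)
            reg=${reg,,} val=${val# }       # illustrative trim; the real helper normalizes differently
            eval "${ref}[\$reg]=\$val"      # yields entries like nvme0[sqes]=0x66
        done < <(nvme "$@")                 # the trace invokes /usr/local/src/nvme-cli/nvme
    }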
00:11:36.722 07:26:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:36.722 07:26:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:36.722 07:26:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:36.722 07:26:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0
00:11:36.722 07:26:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:36.722 07:26:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:36.722 07:26:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:36.722 07:26:45 -- nvme/functions.sh@49 -- # pci=0000:00:08.0
00:11:36.722 07:26:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0
00:11:36.722 07:26:45 -- scripts/common.sh@15 -- # local i
00:11:36.722 07:26:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]]
00:11:36.722 07:26:45 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:36.722 07:26:45 -- scripts/common.sh@24 -- # return 0
00:11:36.722 07:26:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
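Note: the trace is stepping through a controller-enumeration loop. A sketch of what nvme/functions.sh@47-63 is doing, reconstructed from the visible trace (pci_can_use is the scripts/common.sh helper traced above; its allow/block filtering details are an assumption):

    # Walk sysfs, skip controllers whose PCI address is filtered out, then
    # record each controller and its namespace map in global lookup tables.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(< "$ctrl/address")                    # e.g. 0000:00:08.0
        pci_can_use "$pci" || continue              # honors PCI allow/block lists
        ctrl_dev=${ctrl##*/}                        # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls["$ctrl_dev"]=$ctrl_dev                # bookkeeping seen in the trace:
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns           #   controller -> namespace-array name
        bdfs["$ctrl_dev"]=$pci                      #   controller -> PCI BDF
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  #   index-ordered controller list
    done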
00:11:36.722 07:26:45 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:36.722 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:36.722-726 07:26:45 -- identify-controller fields parsed into the nvme1 array (reg=val):
  vid=0x1b36 ssvid=0x1af4 sn='12342   ' mn='QEMU NVMe Ctrl  ' fr='8.0.0  ' rab=6 ieee=525400
  cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
  fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
  oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
  mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
  mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
  anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
  oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
  mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0
  fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
  rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:36.726 07:26:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
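Note: several of the hex values captured above (oncs=0x15d, oacs=0x12a, frmw=0x3, lpa=0x7) are capability bitmasks. A hedged example of consuming one from the array nvme_get just populated; the bit position follows the NVMe spec's ONCS layout (bit 2 = Dataset Management):

    # Check whether nvme1 advertises Dataset Management (deallocate/TRIM).
    oncs=${nvme1[oncs]:-0}                  # 0x15d in this run
    if (( oncs & (1 << 2) )); then
        echo "nvme1 supports Dataset Management"
    fi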
00:11:36.726 07:26:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:36.726 07:26:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:36.726 07:26:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:36.726 07:26:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:36.726 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:36.726-727 07:26:45 -- identify-namespace fields parsed into the nvme1n1 array (reg=val):
  nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
  nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
  nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
  nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 '
  lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 '
  lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:36.727 07:26:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:36.727 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:36.727 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.727 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.727 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:36.727 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:36.727 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:36.727 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.727 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.727 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:36.727 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:36.727 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:36.727 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.727 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.727 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:36.728 07:26:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.728 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.728 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:36.728 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
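The trace above is the nvme_get helper walking `nvme id-ns /dev/nvme1n2` output line by line: it splits each "field : value" row on the colon (IFS=:) and evals the pair into a global associative array named after the device, so every Identify Namespace field becomes addressable as ${nvme1n2[nsze]}, ${nvme1n2[flbas]}, and so on. A minimal sketch of that pattern, assuming nvme-cli's default plain-text output; the helper name and argument order here are illustrative, not the exact SPDK function:

#!/usr/bin/env bash
# Sketch: parse `nvme id-ns` / `nvme id-ctrl` plain-text output into a
# global associative array named by $1 (e.g. nvme1n2), mirroring the
# eval 'nvme1n2[nsze]="0x100000"' steps in the trace above.
nvme_get_sketch() {
    local ref=$1 id_cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                     # e.g. declare -gA nvme1n2=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "nsze    " -> "nsze", "lbaf  4" -> "lbaf4"
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\"\${val# }\""  # nvme1n2[nsze]=0x100000
    done < <(nvme "$id_cmd" "$dev")
}
# usage: nvme_get_sketch nvme1n2 id-ns /dev/nvme1n2; echo "${nvme1n2[nsze]}"

Note the values just captured for nvme1n2: flbas=0x4 selects lbaf4, whose descriptor is "ms:0 lbads:12 rp:0 (in use)", i.e. the namespace is formatted with 4096-byte (2^12) logical blocks and no per-block metadata, and nsze=ncap=0x100000 such blocks puts it at 4 GiB.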
00:11:36.729 07:26:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:36.729 07:26:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:36.729 07:26:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:36.729 07:26:45 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@18 -- # shift 00:11:36.729 07:26:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.729 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:36.729 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:36.729 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:36.730 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.730 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.730 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:36.731 07:26:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:36.731 07:26:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:36.731 07:26:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:36.731 07:26:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:36.731 07:26:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:36.731 07:26:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:36.731 07:26:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:36.731 07:26:45 -- scripts/common.sh@15 -- # local i 00:11:36.731 07:26:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:36.731 07:26:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:36.731 07:26:45 -- scripts/common.sh@24 -- # return 0 00:11:36.731 07:26:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:36.731 07:26:45 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:36.731 07:26:45 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@18 -- # shift 00:11:36.731 07:26:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:36.731 07:26:45 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.731 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.731 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 
07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.732 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:36.732 07:26:45 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:36.732 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 
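The same read loop is now consuming `nvme id-ctrl /dev/nvme2` output; once it finishes, this controller should be registered the same way nvme1 was above (by analogy: ctrls[nvme2]=nvme2, nvmes[nvme2]=nvme2_ns, bdfs[nvme2]=0000:00:06.0). After that, every Identify Controller field is an ordinary array lookup. A hedged example of reading back the values captured so far in this trace (the 4 KiB MPSMIN is an assumption for this QEMU controller, not shown in the log):

# mdts=7 means the max data transfer size is 2^7 units of CAP.MPSMIN;
# assuming a 4 KiB minimum page size, that is 128 * 4 KiB = 512 KiB.
mps_min=4096
echo "nvme2 sn=${nvme2[sn]} ver=${nvme2[ver]}"
echo "nvme2 max transfer: $(( mps_min << ${nvme2[mdts]} )) bytes"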
00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 
00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.733 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:36.733 07:26:45 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.733 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:36.734 07:26:45 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:36.734 07:26:45 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.734 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.734 07:26:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:36.735 07:26:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:36.735 07:26:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:36.735 07:26:45 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:36.735 07:26:45 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@18 -- # shift 00:11:36.735 07:26:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.735 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:36.735 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:36.735 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:36.736 07:26:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.736 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:36.736 07:26:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:36.736 07:26:45 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:36.736 07:26:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:36.736 07:26:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:36.736 07:26:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:36.736 07:26:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:36.736 07:26:45 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:36.736 07:26:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:36.736 07:26:45 -- scripts/common.sh@15 -- # local i 00:11:36.736 07:26:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:36.736 07:26:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:36.736 07:26:45 -- scripts/common.sh@24 -- # return 0 00:11:36.736 07:26:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:36.736 07:26:45 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:36.736 07:26:45 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:36.736 07:26:45 -- nvme/functions.sh@18 -- # shift 00:11:36.737 07:26:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.737 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:36.737 07:26:45 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.737 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:36.738 07:26:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.738 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:36.738 07:26:45 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.738 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.739 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:36.739 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:36.739 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:36.740 07:26:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:36.740 07:26:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:36.740 07:26:45 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:36.740 07:26:45 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@18 -- # shift 00:11:36.740 07:26:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:36.740 
07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.740 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.740 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.740 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:36.741 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.741 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.741 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.742 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:36.742 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.742 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.742 07:26:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:36.742 07:26:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:36.742 07:26:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:36.742 07:26:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:36.742 07:26:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:36.742 07:26:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:36.742 07:26:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:36.742 07:26:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:36.742 07:26:45 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:36.742 07:26:45 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:36.742 07:26:45 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:36.742 07:26:45 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:36.742 07:26:45 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:36.742 07:26:45 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:36.742 07:26:45 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:36.742 07:26:45 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:37.002 07:26:45 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:37.002 07:26:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:37.003 07:26:45 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:37.003 07:26:45 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:37.003 07:26:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:37.003 07:26:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # echo nvme1 00:11:37.003 07:26:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:37.003 07:26:45 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:37.003 
07:26:45 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:37.003 07:26:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:37.003 07:26:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # echo nvme0 00:11:37.003 07:26:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:37.003 07:26:45 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:37.003 07:26:45 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:37.003 07:26:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:37.003 07:26:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # echo nvme3 00:11:37.003 07:26:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:37.003 07:26:45 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:37.003 07:26:45 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:37.003 07:26:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:37.003 07:26:45 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:37.003 07:26:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:37.003 07:26:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:37.003 07:26:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:37.003 07:26:45 -- nvme/functions.sh@197 -- # echo nvme2 00:11:37.003 07:26:45 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:37.003 07:26:45 -- nvme/functions.sh@206 -- # echo nvme1 00:11:37.003 07:26:45 -- nvme/functions.sh@207 -- # return 0 00:11:37.003 07:26:45 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:37.003 07:26:45 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:37.003 07:26:45 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:37.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:37.942 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.942 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.942 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.942 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:37.942 07:26:47 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:37.942 07:26:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:37.942 07:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:37.942 07:26:47 -- common/autotest_common.sh@10 -- # set +x 00:11:37.942 ************************************ 00:11:37.942 START TEST nvme_simple_copy 00:11:37.942 ************************************ 00:11:37.942 07:26:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:38.200 Initializing NVMe Controllers 00:11:38.200 Attaching to 0000:00:08.0 00:11:38.200 Controller supports SCC. Attached to 0000:00:08.0 00:11:38.200 Namespace ID: 1 size: 4GB 00:11:38.200 Initialization complete. 00:11:38.200 00:11:38.200 Controller QEMU NVMe Ctrl (12342 ) 00:11:38.200 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:38.200 Namespace Block Size:4096 00:11:38.200 Writing LBAs 0 to 63 with Random Data 00:11:38.200 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:38.200 LBAs matching Written Data: 64 00:11:38.200 ************************************ 00:11:38.200 00:11:38.200 real 0m0.267s 00:11:38.200 user 0m0.089s 00:11:38.200 sys 0m0.075s 00:11:38.200 07:26:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.200 07:26:47 -- common/autotest_common.sh@10 -- # set +x 00:11:38.200 END TEST nvme_simple_copy 00:11:38.200 ************************************ 00:11:38.458 ************************************ 00:11:38.458 END TEST nvme_scc 00:11:38.458 ************************************ 00:11:38.458 00:11:38.458 real 0m7.725s 00:11:38.458 user 0m1.096s 00:11:38.458 sys 0m1.396s 00:11:38.458 07:26:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.458 07:26:47 -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 07:26:47 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:38.458 07:26:47 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:38.458 07:26:47 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:38.458 07:26:47 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:38.458 07:26:47 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:38.458 07:26:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:38.458 07:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.458 07:26:47 -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 ************************************ 00:11:38.458 START TEST nvme_fdp 00:11:38.458 ************************************ 00:11:38.458 07:26:47 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:38.458 * Looking for test storage... 
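The nvme_simple_copy run above wrote LBAs 0 through 63 with random data, issued a Simple Copy to destination LBA 256, and read back 64 matching LBAs. A hedged standalone sketch of that verification step, assuming the namespace were visible to the kernel as a block device (the test itself drives the controller through SPDK's PCIe driver, so the device path below is an assumption; the 4096-byte block size is taken from the "Namespace Block Size:4096" line above):

  # re-check the copy result reported above: 64 blocks starting at
  # LBA 0 should be byte-identical to 64 blocks starting at LBA 256.
  dev=/dev/nvme1n1   # assumption; the test attaches via 'trtype:pcie traddr:0000:00:08.0'
  bs=4096 count=64   # from "Namespace Block Size:4096" in the log
  dd if="$dev" bs="$bs" skip=0   count="$count" of=/tmp/src.bin 2>/dev/null
  dd if="$dev" bs="$bs" skip=256 count="$count" of=/tmp/dst.bin 2>/dev/null
  cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: $count"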
00:11:38.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:38.458 07:26:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:38.458 07:26:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:38.458 07:26:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:38.458 07:26:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:38.458 07:26:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:38.458 07:26:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:38.458 07:26:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:38.458 07:26:47 -- scripts/common.sh@335 -- # IFS=.-: 00:11:38.458 07:26:47 -- scripts/common.sh@335 -- # read -ra ver1 00:11:38.458 07:26:47 -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.458 07:26:47 -- scripts/common.sh@336 -- # read -ra ver2 00:11:38.458 07:26:47 -- scripts/common.sh@337 -- # local 'op=<' 00:11:38.458 07:26:47 -- scripts/common.sh@339 -- # ver1_l=2 00:11:38.458 07:26:47 -- scripts/common.sh@340 -- # ver2_l=1 00:11:38.458 07:26:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:38.458 07:26:47 -- scripts/common.sh@343 -- # case "$op" in 00:11:38.458 07:26:47 -- scripts/common.sh@344 -- # : 1 00:11:38.458 07:26:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:38.458 07:26:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.458 07:26:47 -- scripts/common.sh@364 -- # decimal 1 00:11:38.458 07:26:47 -- scripts/common.sh@352 -- # local d=1 00:11:38.458 07:26:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.458 07:26:47 -- scripts/common.sh@354 -- # echo 1 00:11:38.458 07:26:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:38.458 07:26:47 -- scripts/common.sh@365 -- # decimal 2 00:11:38.458 07:26:47 -- scripts/common.sh@352 -- # local d=2 00:11:38.458 07:26:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.458 07:26:47 -- scripts/common.sh@354 -- # echo 2 00:11:38.458 07:26:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:38.458 07:26:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:38.458 07:26:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:38.458 07:26:47 -- scripts/common.sh@367 -- # return 0 00:11:38.458 07:26:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.458 07:26:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:38.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.458 --rc genhtml_branch_coverage=1 00:11:38.458 --rc genhtml_function_coverage=1 00:11:38.458 --rc genhtml_legend=1 00:11:38.458 --rc geninfo_all_blocks=1 00:11:38.458 --rc geninfo_unexecuted_blocks=1 00:11:38.458 00:11:38.458 ' 00:11:38.458 07:26:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:38.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.458 --rc genhtml_branch_coverage=1 00:11:38.458 --rc genhtml_function_coverage=1 00:11:38.458 --rc genhtml_legend=1 00:11:38.458 --rc geninfo_all_blocks=1 00:11:38.458 --rc geninfo_unexecuted_blocks=1 00:11:38.458 00:11:38.458 ' 00:11:38.458 07:26:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:38.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.458 --rc genhtml_branch_coverage=1 00:11:38.458 --rc genhtml_function_coverage=1 00:11:38.458 --rc genhtml_legend=1 00:11:38.458 --rc geninfo_all_blocks=1 00:11:38.458 --rc geninfo_unexecuted_blocks=1 00:11:38.458 00:11:38.458 ' 00:11:38.458 07:26:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:38.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.458 --rc genhtml_branch_coverage=1 00:11:38.458 --rc genhtml_function_coverage=1 00:11:38.458 --rc genhtml_legend=1 00:11:38.458 --rc geninfo_all_blocks=1 00:11:38.458 --rc geninfo_unexecuted_blocks=1 00:11:38.458 00:11:38.458 ' 00:11:38.458 07:26:47 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:38.458 07:26:47 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:38.458 07:26:47 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:38.458 07:26:47 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:38.458 07:26:47 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.458 07:26:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.458 07:26:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.458 07:26:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.458 07:26:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.458 07:26:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.459 07:26:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.459 07:26:47 -- paths/export.sh@5 -- # export PATH 00:11:38.459 07:26:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.459 07:26:47 -- nvme/functions.sh@10 -- # ctrls=() 00:11:38.459 07:26:47 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:38.459 07:26:47 -- nvme/functions.sh@11 -- # nvmes=() 00:11:38.459 07:26:47 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:38.459 07:26:47 -- nvme/functions.sh@12 -- # bdfs=() 00:11:38.459 07:26:47 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:38.459 07:26:47 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:38.459 07:26:47 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:38.459 
07:26:47 -- nvme/functions.sh@14 -- # nvme_name= 00:11:38.459 07:26:47 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.459 07:26:47 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:39.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.023 Waiting for block devices as requested 00:11:39.023 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.023 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.280 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.280 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:44.551 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:44.551 07:26:53 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:44.551 07:26:53 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:44.551 07:26:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:44.551 07:26:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:44.551 07:26:53 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:44.551 07:26:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:44.551 07:26:53 -- scripts/common.sh@15 -- # local i 00:11:44.551 07:26:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:44.551 07:26:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:44.551 07:26:53 -- scripts/common.sh@24 -- # return 0 00:11:44.551 07:26:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:44.551 07:26:53 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:44.551 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:44.551 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.551 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:44.551 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.551 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:44.551 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:44.552 07:26:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 
07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.552 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:44.552 07:26:53 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.552 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:44.553 
07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:44.553 07:26:53 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 
07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.553 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:44.553 07:26:53 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.553 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 
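Every controller in this run reports oncs=0x15d, and the ctrl_has_scc calls earlier selected nvme1 by testing bit 8 of that mask with (( oncs & 1 << 8 )). A minimal sketch that decodes the whole bitmask the same way; the bit names are my reading of the NVMe base spec ONCS field, not something the suite prints:

  oncs=0x15d
  # bit meanings per the NVMe base spec (assumption, not log output);
  # bit 8 is the Copy command, which is what "supports SCC" means here.
  names=(compare write_uncorrectable dataset_mgmt write_zeroes
         save_select_features reservations timestamp verify copy)
  for i in "${!names[@]}"; do
      (( oncs & 1 << i )) && echo "oncs bit $i set: ${names[i]}"
  done

For 0x15d this prints bits 0, 2, 3, 4, 6 and 8, which is why the scc feature probe succeeded on every controller above.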
00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:44.554 07:26:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
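The nvme_get trace that just finished for nvme0 is functions.sh splitting each "reg : val" line of nvme-cli's id-ctrl output on ':' and eval-ing it into a global array. A simplified sketch of the same parse without the eval indirection, assuming nvme-cli and a kernel-visible controller (the device path is an example):

  declare -A ctrl
  # nvme-cli prints one "name : value" pair per line; strip the
  # padding and key an associative array on the register name.
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}
      val=${val# }
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "sn=${ctrl[sn]:-?} oncs=${ctrl[oncs]:-?}"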
00:11:44.554 07:26:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:44.554 07:26:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:44.554 07:26:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:44.554 07:26:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:44.554 07:26:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:44.554 07:26:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:44.554 07:26:53 -- scripts/common.sh@15 -- # local i 00:11:44.554 07:26:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:44.554 07:26:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:44.554 07:26:53 -- scripts/common.sh@24 -- # return 0 00:11:44.554 07:26:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:44.554 07:26:53 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:44.554 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.554 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.554 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:44.554 07:26:53 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:44.554 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:44.555 
07:26:53 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 
00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.555 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:44.555 07:26:53 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:44.555 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
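
By this point in the trace the interesting nvme1 identify fields are already captured (ver=0x10400, mdts=7, oacs=0x12a, ...), so the array can be interrogated directly. A hypothetical consumer, not part of the test run, just to show what those captured values decode to:

    # Hypothetical consumer of the nvme1 array populated above (not part
    # of the test): decode VS 0x10400 and test an OACS capability bit.
    ver=${nvme1[ver]}                             # 0x10400 in the trace
    printf 'NVMe %d.%d.%d\n' \
        $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))  # 1.4.0
    if (( ${nvme1[oacs]} & 0x08 )); then          # OACS bit 3: NS management
        echo 'namespace management supported'     # 0x12a has bit 3 set
    fi
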
00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.556 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.556 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.556 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:44.557 07:26:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:44.557 07:26:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:44.557 07:26:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:44.557 07:26:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:44.557 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.557 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.557 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.557 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:44.557 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 
00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:44.558 07:26:53 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.558 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.558 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.558 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:44.559 07:26:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:44.559 07:26:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:44.559 07:26:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:44.559 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.559 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 
00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:44.559 07:26:53 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.559 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.559 
07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:44.559 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:44.559 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:44.560 07:26:53 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:44.560 07:26:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:44.560 07:26:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
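
Both namespaces parsed so far report the same geometry: flbas 0x4 selects LBA format 4 ("ms:0 lbads:12 rp:0 (in use)", i.e. 4096-byte blocks with no metadata) and nsze is 0x100000 blocks, which works out to 4 GiB per namespace. A hypothetical decode of those fields:

    # Hypothetical decode of the id-ns fields captured above:
    # flbas low nibble -> format index 4; lbaf4 has lbads:12.
    fmt=$(( ${nvme1n1[flbas]} & 0xf ))        # -> 4
    block=$(( 1 << 12 ))                      # lbads:12 -> 4096-byte blocks
    bytes=$(( ${nvme1n1[nsze]} * block ))     # 0x100000 * 4096
    echo "nvme1n1: lbaf${fmt}, ${block}B blocks, $(( bytes >> 30 )) GiB"  # 4 GiB
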
00:11:44.560 07:26:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:44.560 07:26:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:44.560 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.560 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.560 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.560 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:44.560 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.561 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.561 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.561 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.562 07:26:53 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:44.562 07:26:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:44.562 07:26:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:44.562 07:26:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:44.562 07:26:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:44.562 07:26:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:44.562 07:26:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:44.562 07:26:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:44.562 07:26:53 -- scripts/common.sh@15 -- # local i 00:11:44.562 07:26:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:44.562 07:26:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:44.562 07:26:53 -- scripts/common.sh@24 -- # return 0 00:11:44.562 07:26:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:44.562 07:26:53 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:44.562 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.562 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 
07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.562 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.562 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.562 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:44.563 07:26:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:44.563 07:26:53 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.563 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.563 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:44.563 07:26:53 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 
00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:44.564 07:26:53 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:44.564 07:26:53 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:44.564 07:26:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:44.564 07:26:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:44.564 07:26:53 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:44.564 07:26:53 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:44.564 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:44.564 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.564 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.564 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 
07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:44.565 07:26:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:44.565 
07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.565 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:44.565 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:44.565 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:44.566 07:26:53 
-- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:44.566 07:26:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:44.566 07:26:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:44.566 07:26:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:44.566 07:26:53 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:44.566 07:26:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:44.566 07:26:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:44.566 07:26:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:44.566 07:26:53 -- scripts/common.sh@15 -- # local i 00:11:44.566 07:26:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:44.566 07:26:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:44.566 07:26:53 -- scripts/common.sh@24 -- # return 0 00:11:44.566 07:26:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:44.566 07:26:53 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:44.566 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.566 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 
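What the trace just above is doing: nvme/functions.sh@16-23 runs `nvme id-ctrl /dev/nvme3`, splits every output line on `:` with `IFS=: read -r reg val`, and evals each pair into the `nvme3` associative array. A minimal sketch of that loop, assuming the same `register : value` output format (the function name and whitespace handling here are illustrative, not the exact upstream code):

    # Parse `nvme id-ctrl` output into a bash (4+) associative array,
    # mirroring the nvme_get trace above (simplified sketch).
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                       # e.g. declare -gA nvme3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # keys arrive padded: "vid     "
            [[ -n $reg && -n $val ]] || continue  # skip the banner and blank lines
            eval "${ref}[\$reg]=\${val# }"        # e.g. nvme3[vid]=0x1b36
        done < <(nvme id-ctrl "$dev")
    }
    # illustrative use: nvme_get_sketch nvme3 /dev/nvme3; echo "${nvme3[sn]}"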
-- # eval 'nvme3[ieee]="525400"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.566 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:44.566 07:26:53 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.566 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 
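Two of the values captured just above decode to familiar numbers. `ver` packs the NVMe spec version as (major<<16)|(minor<<8)|tertiary, so 0x10400 is NVMe 1.4.0, and `mdts=7` caps transfers at 2^7 minimum-size pages, i.e. 512 KiB with 4 KiB pages. A quick check of the version arithmetic:

    # Decode the VER value parsed above: (major<<16)|(minor<<8)|tertiary
    v=0x10400
    echo "NVMe $(( v >> 16 )).$(( (v >> 8) & 0xff )).$(( v & 0xff ))"   # NVMe 1.4.0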
07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 
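The wctemp/cctemp fields read above are expressed in kelvins per the NVMe spec, so this QEMU controller reports a warning threshold of 70 C and a critical threshold of 100 C:

    # WCTEMP/CCTEMP are kelvins; convert the values parsed above
    echo "warning at $(( 343 - 273 ))C, critical at $(( 373 - 273 ))C"   # 70C, 100C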
'nvme3[tnvmcap]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.567 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:44.567 07:26:53 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:44.567 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:44.568 
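The sqes/cqes bytes captured above pack two log2 sizes per field: the low nibble is the required queue-entry size, the high nibble the maximum. 0x66 therefore means fixed 64-byte submission queue entries and 0x44 fixed 16-byte completion queue entries:

    # Decode SQES/CQES: low nibble = required log2 size, high nibble = maximum
    decode_qes() { echo "min $(( 1 << ($1 & 0xf) ))B, max $(( 1 << (($1 >> 4) & 0xf) ))B"; }
    decode_qes 0x66   # min 64B, max 64B  (submission queue entries)
    decode_qes 0x44   # min 16B, max 16B  (completion queue entries)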
07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.568 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:44.568 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:44.568 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:44.569 07:26:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:44.569 07:26:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:44.569 07:26:53 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:44.569 07:26:53 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@18 -- # shift 00:11:44.569 07:26:53 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:44.569 
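The flbas=0x4 read above selects LBA format 4 for this namespace, which the lbaf4 descriptor further down marks as in use: ms:0 lbads:12, i.e. 4096-byte blocks with no metadata. Combined with nsze=0x140000 blocks, the namespace size works out to exactly 5 GiB:

    # nsze blocks times 2^lbads bytes per block, for the values parsed above
    echo "$(( (0x140000 * (1 << 12)) / 1024 / 1024 / 1024 )) GiB"   # 5 GiB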
07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.569 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:44.569 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.569 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 
07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.570 07:26:53 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:44.570 07:26:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.570 07:26:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:44.570 07:26:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:44.570 07:26:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:44.570 07:26:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:44.570 07:26:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:44.570 07:26:53 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:44.570 07:26:53 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:44.570 07:26:53 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:44.570 07:26:53 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:44.570 07:26:53 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:44.570 07:26:53 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:44.570 07:26:53 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:44.570 07:26:53 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:44.570 07:26:53 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:44.570 07:26:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:44.570 07:26:53 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:44.570 07:26:53 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:44.570 07:26:53 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:44.570 07:26:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:44.570 07:26:53 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:44.570 07:26:53 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:44.570 07:26:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:44.570 07:26:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:44.570 07:26:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:44.570 07:26:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:44.570 07:26:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:44.570 07:26:53 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:44.570 07:26:53 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:44.570 07:26:53 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:44.570 07:26:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:44.570 07:26:53 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:44.570 07:26:53 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:44.570 07:26:53 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:44.570 07:26:53 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:44.570 07:26:53 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:44.570 07:26:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:44.570 07:26:53 -- nvme/functions.sh@197 -- # echo nvme0 00:11:44.570 07:26:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:44.570 07:26:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:44.570 07:26:53 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:44.570 07:26:53 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:44.570 07:26:53 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:44.570 07:26:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:44.570 07:26:53 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:44.571 07:26:53 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:44.571 07:26:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:44.571 07:26:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:44.571 07:26:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:44.571 07:26:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:44.571 07:26:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:44.571 07:26:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:44.571 07:26:53 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:44.571 07:26:53 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:44.571 07:26:53 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:44.571 07:26:53 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:44.571 07:26:53 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:44.571 07:26:53 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:44.571 07:26:53 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:44.571 07:26:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:44.571 07:26:53 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:44.571 07:26:53 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:44.571 07:26:53 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:44.571 07:26:53 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:44.571 07:26:53 -- nvme/functions.sh@204 -- # trap - ERR 00:11:44.571 07:26:53 -- nvme/functions.sh@204 -- # print_backtrace 00:11:44.571 07:26:53 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:44.571 07:26:53 -- common/autotest_common.sh@1142 -- # return 0 00:11:44.829 07:26:53 -- nvme/functions.sh@204 -- # trap - ERR 00:11:44.829 07:26:53 -- nvme/functions.sh@204 -- # print_backtrace 00:11:44.829 07:26:53 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:44.829 07:26:53 -- common/autotest_common.sh@1142 -- # return 0 00:11:44.829 07:26:53 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:44.829 07:26:53 -- nvme/functions.sh@206 -- # echo nvme0 00:11:44.829 07:26:53 -- nvme/functions.sh@207 -- # return 0 00:11:44.829 07:26:53 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:44.829 07:26:53 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:44.829 07:26:53 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:45.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:45.651 0000:00:07.0 (1b36 0010): nvme -> 
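The ctrl_has_fdp walk above reduces to a single bit test: CTRATT bit 19 advertises Flexible Data Placement. nvme0 reported ctratt=0x88010 (bits 4, 15, and 19 set) and is echoed as the FDP-capable controller; the three QEMU controllers reported 0x8000 (bit 15 only) and are skipped:

    # The FDP gate traced above: test CTRATT bit 19
    for ctratt in 0x88010 0x8000; do
        (( ctratt & 1 << 19 )) && echo "$ctratt: FDP" || echo "$ctratt: no FDP"
    done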
uio_pci_generic 00:11:45.651 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:45.651 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:45.651 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:45.651 07:26:54 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:45.651 07:26:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:45.651 07:26:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.651 07:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:45.651 ************************************ 00:11:45.651 START TEST nvme_flexible_data_placement 00:11:45.651 ************************************ 00:11:45.651 07:26:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:45.909 Initializing NVMe Controllers 00:11:45.909 Attaching to 0000:00:09.0 00:11:45.909 Controller supports FDP Attached to 0000:00:09.0 00:11:45.909 Namespace ID: 1 Endurance Group ID: 1 00:11:45.909 Initialization complete. 00:11:45.909 00:11:45.909 ================================== 00:11:45.909 == FDP tests for Namespace: #01 == 00:11:45.909 ================================== 00:11:45.909 00:11:45.909 Get Feature: FDP: 00:11:45.909 ================= 00:11:45.909 Enabled: Yes 00:11:45.909 FDP configuration Index: 0 00:11:45.909 00:11:45.909 FDP configurations log page 00:11:45.909 =========================== 00:11:45.909 Number of FDP configurations: 1 00:11:45.909 Version: 0 00:11:45.909 Size: 112 00:11:45.909 FDP Configuration Descriptor: 0 00:11:45.909 Descriptor Size: 96 00:11:45.909 Reclaim Group Identifier format: 2 00:11:45.909 FDP Volatile Write Cache: Not Present 00:11:45.909 FDP Configuration: Valid 00:11:45.909 Vendor Specific Size: 0 00:11:45.909 Number of Reclaim Groups: 2 00:11:45.909 Number of Reclaim Unit Handles: 8 00:11:45.909 Max Placement Identifiers: 128 00:11:45.909 Number of Namespaces Supported: 256 00:11:45.909 Reclaim Unit Nominal Size: 6000000 bytes 00:11:45.909 Estimated Reclaim Unit Time Limit: Not Reported 00:11:45.909 RUH Desc #000: RUH Type: Initially Isolated 00:11:45.909 RUH Desc #001: RUH Type: Initially Isolated 00:11:45.909 RUH Desc #002: RUH Type: Initially Isolated 00:11:45.909 RUH Desc #003: RUH Type: Initially Isolated 00:11:45.909 RUH Desc #004: RUH Type: Initially Isolated 00:11:45.909 RUH Desc #005: RUH Type: Initially Isolated 00:11:45.909 RUH Desc #006: RUH Type: Initially Isolated 00:11:45.909 RUH Desc #007: RUH Type: Initially Isolated 00:11:45.909 00:11:45.909 FDP reclaim unit handle usage log page 00:11:45.909 ====================================== 00:11:45.909 Number of Reclaim Unit Handles: 8 00:11:45.909 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:45.909 RUH Usage Desc #001: RUH Attributes: Unused 00:11:45.909 RUH Usage Desc #002: RUH Attributes: Unused 00:11:45.909 RUH Usage Desc #003: RUH Attributes: Unused 00:11:45.909 RUH Usage Desc #004: RUH Attributes: Unused 00:11:45.909 RUH Usage Desc #005: RUH Attributes: Unused 00:11:45.909 RUH Usage Desc #006: RUH Attributes: Unused 00:11:45.909 RUH Usage Desc #007: RUH Attributes: Unused 00:11:45.909 00:11:45.909 FDP statistics log page 00:11:45.909 ======================= 00:11:45.909 Host bytes with metadata written: 1005617152 00:11:45.909 Media bytes with metadata written: 1005895680 00:11:45.909 Media bytes erased: 0 00:11:45.909 00:11:45.909 FDP Reclaim unit handle status
00:11:45.909 ============================== 00:11:45.909 Number of RUHS descriptors: 2 00:11:45.909 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000000f8 00:11:45.909 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:45.909 00:11:45.909 FDP write on placement id: 0 success 00:11:45.909 00:11:45.909 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:45.909 00:11:45.909 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:45.909 00:11:45.909 Get Feature: FDP Events for Placement handle: #0 00:11:45.909 ======================== 00:11:45.909 Number of FDP Events: 6 00:11:45.909 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:45.909 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:45.909 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:45.909 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:45.909 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:45.909 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:45.909 00:11:45.909 FDP events log page 00:11:45.909 =================== 00:11:45.909 Number of FDP events: 1 00:11:45.909 FDP Event #0: 00:11:45.909 Event Type: RU Not Written to Capacity 00:11:45.909 Placement Identifier: Valid 00:11:45.909 NSID: Valid 00:11:45.909 Location: Valid 00:11:45.909 Placement Identifier: 0 00:11:45.909 Event Timestamp: 9 00:11:45.909 Namespace Identifier: 1 00:11:45.909 Reclaim Group Identifier: 0 00:11:45.909 Reclaim Unit Handle Identifier: 0 00:11:45.909 00:11:45.909 FDP test passed 00:11:45.909 00:11:45.909 ************************************ 00:11:45.909 real 0m0.226s 00:11:45.909 user 0m0.069s 00:11:45.909 sys 0m0.054s 00:11:45.909 07:26:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:45.909 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:45.909 END TEST nvme_flexible_data_placement 00:11:45.909 ************************************ 00:11:46.167 ************************************ 00:11:46.167 END TEST nvme_fdp 00:11:46.167 ************************************ 00:11:46.167 00:11:46.167 real 0m7.639s 00:11:46.167 user 0m1.026s 00:11:46.167 sys 0m1.372s 00:11:46.168 07:26:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.168 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:46.168 07:26:55 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:46.168 07:26:55 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:46.168 07:26:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:46.168 07:26:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.168 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:46.168 ************************************ 00:11:46.168 START TEST nvme_rpc 00:11:46.168 ************************************ 00:11:46.168 07:26:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:46.168 * Looking for test storage... 
00:11:46.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:46.168 07:26:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:46.168 07:26:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:46.168 07:26:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:46.168 07:26:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:46.168 07:26:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:46.168 07:26:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:46.168 07:26:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:46.168 07:26:55 -- scripts/common.sh@335 -- # IFS=.-: 00:11:46.168 07:26:55 -- scripts/common.sh@335 -- # read -ra ver1 00:11:46.168 07:26:55 -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.168 07:26:55 -- scripts/common.sh@336 -- # read -ra ver2 00:11:46.168 07:26:55 -- scripts/common.sh@337 -- # local 'op=<' 00:11:46.168 07:26:55 -- scripts/common.sh@339 -- # ver1_l=2 00:11:46.168 07:26:55 -- scripts/common.sh@340 -- # ver2_l=1 00:11:46.168 07:26:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:46.168 07:26:55 -- scripts/common.sh@343 -- # case "$op" in 00:11:46.168 07:26:55 -- scripts/common.sh@344 -- # : 1 00:11:46.168 07:26:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:46.168 07:26:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.168 07:26:55 -- scripts/common.sh@364 -- # decimal 1 00:11:46.168 07:26:55 -- scripts/common.sh@352 -- # local d=1 00:11:46.168 07:26:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.168 07:26:55 -- scripts/common.sh@354 -- # echo 1 00:11:46.168 07:26:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:46.168 07:26:55 -- scripts/common.sh@365 -- # decimal 2 00:11:46.168 07:26:55 -- scripts/common.sh@352 -- # local d=2 00:11:46.168 07:26:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.168 07:26:55 -- scripts/common.sh@354 -- # echo 2 00:11:46.168 07:26:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:46.168 07:26:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:46.168 07:26:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:46.168 07:26:55 -- scripts/common.sh@367 -- # return 0 00:11:46.168 07:26:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.168 07:26:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:46.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.168 --rc genhtml_branch_coverage=1 00:11:46.168 --rc genhtml_function_coverage=1 00:11:46.168 --rc genhtml_legend=1 00:11:46.168 --rc geninfo_all_blocks=1 00:11:46.168 --rc geninfo_unexecuted_blocks=1 00:11:46.168 00:11:46.168 ' 00:11:46.168 07:26:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:46.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.168 --rc genhtml_branch_coverage=1 00:11:46.168 --rc genhtml_function_coverage=1 00:11:46.168 --rc genhtml_legend=1 00:11:46.168 --rc geninfo_all_blocks=1 00:11:46.168 --rc geninfo_unexecuted_blocks=1 00:11:46.168 00:11:46.168 ' 00:11:46.168 07:26:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:46.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.168 --rc genhtml_branch_coverage=1 00:11:46.168 --rc genhtml_function_coverage=1 00:11:46.168 --rc genhtml_legend=1 00:11:46.168 --rc geninfo_all_blocks=1 00:11:46.168 --rc geninfo_unexecuted_blocks=1 00:11:46.168 00:11:46.168 ' 00:11:46.168 07:26:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:46.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.168 --rc genhtml_branch_coverage=1 00:11:46.168 --rc genhtml_function_coverage=1 00:11:46.168 --rc genhtml_legend=1 00:11:46.168 --rc geninfo_all_blocks=1 00:11:46.168 --rc geninfo_unexecuted_blocks=1 00:11:46.168 00:11:46.168 ' 00:11:46.168 07:26:55 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.168 07:26:55 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:46.168 07:26:55 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:46.168 07:26:55 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:46.168 07:26:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:46.168 07:26:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:46.168 07:26:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:46.168 07:26:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:46.168 07:26:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:46.168 07:26:55 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:46.168 07:26:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:46.426 07:26:55 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:46.426 07:26:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:46.426 07:26:55 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:46.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.426 07:26:55 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:46.426 07:26:55 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66676 00:11:46.426 07:26:55 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:46.426 07:26:55 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:46.426 07:26:55 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66676 00:11:46.426 07:26:55 -- common/autotest_common.sh@829 -- # '[' -z 66676 ']' 00:11:46.426 07:26:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.426 07:26:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.426 07:26:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.426 07:26:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.426 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:46.426 [2024-11-19 07:26:55.503368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
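For readers following the xtrace above: get_first_nvme_bdf reduces to three steps — generate a bdev config, pull out every PCI transport address, take the first. A minimal sketch of those steps under the paths used in this run (only the error-message text is illustrative; every command appears in the trace):

#!/usr/bin/env bash
# Sketch of get_nvme_bdfs + get_first_nvme_bdf as traced above.
rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits a bdev config; jq extracts each controller's traddr.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
# The trace shows (( 4 == 0 )): four controllers were found in this run.
(( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"  # 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
echo "${bdfs[0]}"           # 0000:00:06.0 becomes bdf for the attach below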
00:11:46.426 [2024-11-19 07:26:55.503597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66676 ] 00:11:46.426 [2024-11-19 07:26:55.653635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:46.684 [2024-11-19 07:26:55.828119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:46.684 [2024-11-19 07:26:55.828730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.684 [2024-11-19 07:26:55.828813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.055 07:26:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.055 07:26:56 -- common/autotest_common.sh@862 -- # return 0 00:11:48.055 07:26:56 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:48.055 Nvme0n1 00:11:48.055 07:26:57 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:48.055 07:26:57 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:48.312 request: 00:11:48.312 { 00:11:48.312 "filename": "non_existing_file", 00:11:48.312 "bdev_name": "Nvme0n1", 00:11:48.312 "method": "bdev_nvme_apply_firmware", 00:11:48.312 "req_id": 1 00:11:48.312 } 00:11:48.312 Got JSON-RPC error response 00:11:48.312 response: 00:11:48.312 { 00:11:48.312 "code": -32603, 00:11:48.312 "message": "open file failed." 00:11:48.312 } 00:11:48.312 07:26:57 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:48.312 07:26:57 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:48.312 07:26:57 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:48.570 07:26:57 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:48.570 07:26:57 -- nvme/nvme_rpc.sh@40 -- # killprocess 66676 00:11:48.570 07:26:57 -- common/autotest_common.sh@936 -- # '[' -z 66676 ']' 00:11:48.570 07:26:57 -- common/autotest_common.sh@940 -- # kill -0 66676 00:11:48.570 07:26:57 -- common/autotest_common.sh@941 -- # uname 00:11:48.570 07:26:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.570 07:26:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66676 00:11:48.570 killing process with pid 66676 00:11:48.570 07:26:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:48.570 07:26:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:48.570 07:26:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66676' 00:11:48.570 07:26:57 -- common/autotest_common.sh@955 -- # kill 66676 00:11:48.570 07:26:57 -- common/autotest_common.sh@960 -- # wait 66676 00:11:49.940 ************************************ 00:11:49.941 END TEST nvme_rpc 00:11:49.941 ************************************ 00:11:49.941 00:11:49.941 real 0m3.525s 00:11:49.941 user 0m6.724s 00:11:49.941 sys 0m0.515s 00:11:49.941 07:26:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:49.941 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:11:49.941 07:26:58 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:49.941 07:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:49.941 07:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:11:49.941 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:11:49.941 ************************************ 00:11:49.941 START TEST nvme_rpc_timeouts 00:11:49.941 ************************************ 00:11:49.941 07:26:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:49.941 * Looking for test storage... 00:11:49.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:49.941 07:26:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:49.941 07:26:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:49.941 07:26:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:49.941 07:26:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:49.941 07:26:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:49.941 07:26:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:49.941 07:26:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:49.941 07:26:58 -- scripts/common.sh@335 -- # IFS=.-: 00:11:49.941 07:26:58 -- scripts/common.sh@335 -- # read -ra ver1 00:11:49.941 07:26:58 -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.941 07:26:58 -- scripts/common.sh@336 -- # read -ra ver2 00:11:49.941 07:26:58 -- scripts/common.sh@337 -- # local 'op=<' 00:11:49.941 07:26:58 -- scripts/common.sh@339 -- # ver1_l=2 00:11:49.941 07:26:58 -- scripts/common.sh@340 -- # ver2_l=1 00:11:49.941 07:26:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:49.941 07:26:58 -- scripts/common.sh@343 -- # case "$op" in 00:11:49.941 07:26:58 -- scripts/common.sh@344 -- # : 1 00:11:49.941 07:26:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:49.941 07:26:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.941 07:26:58 -- scripts/common.sh@364 -- # decimal 1 00:11:49.941 07:26:58 -- scripts/common.sh@352 -- # local d=1 00:11:49.941 07:26:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.941 07:26:58 -- scripts/common.sh@354 -- # echo 1 00:11:49.941 07:26:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:49.941 07:26:58 -- scripts/common.sh@365 -- # decimal 2 00:11:49.941 07:26:58 -- scripts/common.sh@352 -- # local d=2 00:11:49.941 07:26:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.941 07:26:58 -- scripts/common.sh@354 -- # echo 2 00:11:49.941 07:26:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:49.941 07:26:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:49.941 07:26:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:49.941 07:26:58 -- scripts/common.sh@367 -- # return 0 00:11:49.941 07:26:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.941 07:26:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:49.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.941 --rc genhtml_branch_coverage=1 00:11:49.941 --rc genhtml_function_coverage=1 00:11:49.941 --rc genhtml_legend=1 00:11:49.941 --rc geninfo_all_blocks=1 00:11:49.941 --rc geninfo_unexecuted_blocks=1 00:11:49.941 00:11:49.941 ' 00:11:49.941 07:26:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:49.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.941 --rc genhtml_branch_coverage=1 00:11:49.941 --rc genhtml_function_coverage=1 00:11:49.941 --rc genhtml_legend=1 00:11:49.941 --rc geninfo_all_blocks=1 00:11:49.941 --rc geninfo_unexecuted_blocks=1 00:11:49.941 00:11:49.941 ' 00:11:49.941 07:26:58 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:49.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.941 --rc genhtml_branch_coverage=1 00:11:49.941 --rc genhtml_function_coverage=1 00:11:49.941 --rc genhtml_legend=1 00:11:49.941 --rc geninfo_all_blocks=1 00:11:49.941 --rc geninfo_unexecuted_blocks=1 00:11:49.941 00:11:49.941 ' 00:11:49.941 07:26:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:49.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.941 --rc genhtml_branch_coverage=1 00:11:49.941 --rc genhtml_function_coverage=1 00:11:49.941 --rc genhtml_legend=1 00:11:49.941 --rc geninfo_all_blocks=1 00:11:49.941 --rc geninfo_unexecuted_blocks=1 00:11:49.941 00:11:49.941 ' 00:11:49.941 07:26:58 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.941 07:26:58 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66756 00:11:49.941 07:26:58 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66756 00:11:49.941 07:26:58 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66787 00:11:49.941 07:26:58 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:49.941 07:26:58 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66787 00:11:49.941 07:26:58 -- common/autotest_common.sh@829 -- # '[' -z 66787 ']' 00:11:49.941 07:26:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.941 07:26:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.941 07:26:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.941 07:26:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.941 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:11:49.941 07:26:58 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:49.941 [2024-11-19 07:26:59.020830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:49.941 [2024-11-19 07:26:59.020933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66787 ] 00:11:49.941 [2024-11-19 07:26:59.169454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:50.206 [2024-11-19 07:26:59.344598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:50.206 [2024-11-19 07:26:59.345169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.206 [2024-11-19 07:26:59.345214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.581 07:27:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.581 Checking default timeout settings: 00:11:51.581 07:27:00 -- common/autotest_common.sh@862 -- # return 0 00:11:51.581 07:27:00 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:51.581 07:27:00 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:51.581 Making settings changes with rpc: 00:11:51.581 07:27:00 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:51.581 07:27:00 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:51.839 Check default vs. modified settings: 00:11:51.839 07:27:00 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:51.840 07:27:00 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66756 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66756 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:52.098 Setting action_on_timeout is changed as expected. 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
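The timeout_us and timeout_admin_us checks that follow repeat the pipeline just traced for action_on_timeout. Condensed into one loop, with the tmpfile names from this run (the loop is a restatement of the trace, not part of the SPDK scripts):

for setting in action_on_timeout timeout_us timeout_admin_us; do
    # Pull the value column for this key from each saved config; strip punctuation.
    before=$(grep "$setting" /tmp/settings_default_66756 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_66756 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    # Pass criterion: the modified snapshot must differ from the default
    # (none -> abort, 0 -> 12000000, 0 -> 24000000 in this run).
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done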
00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66756 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66756 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:52.098 Setting timeout_us is changed as expected. 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66756 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66756 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:52.098 Setting timeout_admin_us is changed as expected. 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66756 /tmp/settings_modified_66756 00:11:52.098 07:27:01 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66787 00:11:52.098 07:27:01 -- common/autotest_common.sh@936 -- # '[' -z 66787 ']' 00:11:52.098 07:27:01 -- common/autotest_common.sh@940 -- # kill -0 66787 00:11:52.098 07:27:01 -- common/autotest_common.sh@941 -- # uname 00:11:52.098 07:27:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.098 07:27:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66787 00:11:52.098 killing process with pid 66787 00:11:52.098 07:27:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:52.098 07:27:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:52.098 07:27:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66787' 00:11:52.098 07:27:01 -- common/autotest_common.sh@955 -- # kill 66787 00:11:52.098 07:27:01 -- common/autotest_common.sh@960 -- # wait 66787 00:11:53.472 RPC TIMEOUT SETTING TEST PASSED. 00:11:53.472 07:27:02 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
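killprocess, traced here for PID 66787 and earlier for 66676, always follows the same pattern: confirm the PID is alive, confirm its command name, then signal and reap. A sketch of the path this run takes (the sudo comparison is traced above but never true in this log, so its branch is omitted):

pid=66787
kill -0 "$pid" || exit 1                  # process must still exist
name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 in this run
if [ "$name" != sudo ]; then              # the '[ reactor_0 = sudo ]' guard above
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"            # wait reaps it: spdk_tgt is a shell child
fi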
00:11:53.472 ************************************ 00:11:53.472 END TEST nvme_rpc_timeouts 00:11:53.472 ************************************ 00:11:53.472 00:11:53.472 real 0m3.802s 00:11:53.472 user 0m7.364s 00:11:53.472 sys 0m0.494s 00:11:53.472 07:27:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:53.472 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.472 07:27:02 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:53.472 07:27:02 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:53.472 07:27:02 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:53.472 07:27:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:53.472 07:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:53.472 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.472 ************************************ 00:11:53.472 START TEST nvme_xnvme 00:11:53.472 ************************************ 00:11:53.472 07:27:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:53.732 * Looking for test storage... 00:11:53.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:53.732 07:27:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:53.732 07:27:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:53.732 07:27:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:53.732 07:27:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:53.732 07:27:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:53.732 07:27:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:53.732 07:27:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:53.732 07:27:02 -- scripts/common.sh@335 -- # IFS=.-: 00:11:53.732 07:27:02 -- scripts/common.sh@335 -- # read -ra ver1 00:11:53.732 07:27:02 -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.732 07:27:02 -- scripts/common.sh@336 -- # read -ra ver2 00:11:53.732 07:27:02 -- scripts/common.sh@337 -- # local 'op=<' 00:11:53.732 07:27:02 -- scripts/common.sh@339 -- # ver1_l=2 00:11:53.732 07:27:02 -- scripts/common.sh@340 -- # ver2_l=1 00:11:53.732 07:27:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:53.732 07:27:02 -- scripts/common.sh@343 -- # case "$op" in 00:11:53.732 07:27:02 -- scripts/common.sh@344 -- # : 1 00:11:53.732 07:27:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:53.732 07:27:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.732 07:27:02 -- scripts/common.sh@364 -- # decimal 1 00:11:53.732 07:27:02 -- scripts/common.sh@352 -- # local d=1 00:11:53.732 07:27:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.732 07:27:02 -- scripts/common.sh@354 -- # echo 1 00:11:53.732 07:27:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:53.732 07:27:02 -- scripts/common.sh@365 -- # decimal 2 00:11:53.732 07:27:02 -- scripts/common.sh@352 -- # local d=2 00:11:53.732 07:27:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.732 07:27:02 -- scripts/common.sh@354 -- # echo 2 00:11:53.732 07:27:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:53.732 07:27:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:53.732 07:27:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:53.732 07:27:02 -- scripts/common.sh@367 -- # return 0 00:11:53.732 07:27:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.732 07:27:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:53.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.732 --rc genhtml_branch_coverage=1 00:11:53.732 --rc genhtml_function_coverage=1 00:11:53.732 --rc genhtml_legend=1 00:11:53.732 --rc geninfo_all_blocks=1 00:11:53.732 --rc geninfo_unexecuted_blocks=1 00:11:53.732 00:11:53.732 ' 00:11:53.732 07:27:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:53.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.732 --rc genhtml_branch_coverage=1 00:11:53.732 --rc genhtml_function_coverage=1 00:11:53.732 --rc genhtml_legend=1 00:11:53.732 --rc geninfo_all_blocks=1 00:11:53.732 --rc geninfo_unexecuted_blocks=1 00:11:53.732 00:11:53.732 ' 00:11:53.732 07:27:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:53.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.732 --rc genhtml_branch_coverage=1 00:11:53.732 --rc genhtml_function_coverage=1 00:11:53.732 --rc genhtml_legend=1 00:11:53.732 --rc geninfo_all_blocks=1 00:11:53.732 --rc geninfo_unexecuted_blocks=1 00:11:53.732 00:11:53.732 ' 00:11:53.732 07:27:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:53.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.732 --rc genhtml_branch_coverage=1 00:11:53.732 --rc genhtml_function_coverage=1 00:11:53.732 --rc genhtml_legend=1 00:11:53.732 --rc geninfo_all_blocks=1 00:11:53.732 --rc geninfo_unexecuted_blocks=1 00:11:53.732 00:11:53.732 ' 00:11:53.732 07:27:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.732 07:27:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.732 07:27:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.732 07:27:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.732 07:27:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.732 07:27:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.732 07:27:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.732 07:27:02 -- paths/export.sh@5 -- # export PATH 00:11:53.732 07:27:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:53.732 07:27:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:53.732 07:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:53.732 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.732 ************************************ 00:11:53.732 START TEST xnvme_to_malloc_dd_copy 00:11:53.732 ************************************ 00:11:53.732 07:27:02 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:53.732 07:27:02 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:53.732 07:27:02 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:53.732 07:27:02 -- dd/common.sh@191 -- # return 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@18 -- # local io 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:53.732 07:27:02 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:53.732 07:27:02 -- dd/common.sh@31 -- # xtrace_disable 00:11:53.732 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.732 { 00:11:53.732 "subsystems": [ 00:11:53.732 { 00:11:53.732 "subsystem": "bdev", 00:11:53.732 "config": [ 00:11:53.732 { 00:11:53.732 "params": { 00:11:53.732 "block_size": 512, 00:11:53.732 "num_blocks": 2097152, 00:11:53.732 "name": "malloc0" 00:11:53.732 }, 00:11:53.732 "method": "bdev_malloc_create" 00:11:53.732 }, 00:11:53.732 { 00:11:53.732 "params": { 00:11:53.732 "io_mechanism": "libaio", 00:11:53.732 "filename": "/dev/nullb0", 00:11:53.732 "name": "null0" 00:11:53.732 }, 00:11:53.733 "method": "bdev_xnvme_create" 00:11:53.733 }, 00:11:53.733 { 00:11:53.733 "method": "bdev_wait_for_examine" 00:11:53.733 } 00:11:53.733 ] 00:11:53.733 } 00:11:53.733 ] 00:11:53.733 } 00:11:53.733 [2024-11-19 07:27:02.911233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:53.733 [2024-11-19 07:27:02.911407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66918 ] 00:11:53.990 [2024-11-19 07:27:03.060023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.990 [2024-11-19 07:27:03.231899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.529  [2024-11-19T07:27:06.362Z] Copying: 235/1024 [MB] (235 MBps) [2024-11-19T07:27:07.324Z] Copying: 487/1024 [MB] (252 MBps) [2024-11-19T07:27:08.259Z] Copying: 798/1024 [MB] (310 MBps) [2024-11-19T07:27:10.161Z] Copying: 1024/1024 [MB] (average 275 MBps) 00:12:00.911 00:12:00.911 07:27:09 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:00.911 07:27:09 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:00.911 07:27:09 -- dd/common.sh@31 -- # xtrace_disable 00:12:00.911 07:27:09 -- common/autotest_common.sh@10 -- # set +x 00:12:00.911 { 00:12:00.911 "subsystems": [ 00:12:00.911 { 00:12:00.911 "subsystem": "bdev", 00:12:00.911 "config": [ 00:12:00.911 { 00:12:00.911 "params": { 00:12:00.911 "block_size": 512, 00:12:00.911 "num_blocks": 2097152, 00:12:00.911 "name": "malloc0" 00:12:00.911 }, 00:12:00.911 "method": "bdev_malloc_create" 00:12:00.911 }, 00:12:00.911 { 00:12:00.911 "params": { 00:12:00.911 "io_mechanism": "libaio", 00:12:00.911 "filename": "/dev/nullb0", 00:12:00.911 "name": "null0" 00:12:00.911 }, 00:12:00.911 "method": "bdev_xnvme_create" 00:12:00.911 }, 00:12:00.911 { 00:12:00.911 "method": "bdev_wait_for_examine" 00:12:00.911 } 00:12:00.911 ] 00:12:00.911 } 00:12:00.911 ] 00:12:00.911 } 00:12:00.911 [2024-11-19 07:27:09.956507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
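Every copy pass in xnvme_to_malloc_dd_copy has the shape just traced: gen_conf builds a two-bdev config (a 1 GiB malloc bdev plus an xnvme bdev over /dev/nullb0) and spdk_dd streams one bdev into the other. A hedged sketch of one libaio round trip; writing the config to /tmp/xnvme.json is an assumption standing in for the /dev/fd/62 process substitution used above:

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > /tmp/xnvme.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":2097152,"name":"malloc0"},
   "method":"bdev_malloc_create"},
  {"params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
# 2097152 blocks x 512 B = 1 GiB, matching init_null_blk gb=1 earlier.
"$spdk_dd" --ib=malloc0 --ob=null0 --json /tmp/xnvme.json  # pass 1: malloc0 -> null0
"$spdk_dd" --ib=null0 --ob=malloc0 --json /tmp/xnvme.json  # pass 2: the reverse direction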
00:12:00.911 [2024-11-19 07:27:09.956615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67005 ] 00:12:00.911 [2024-11-19 07:27:10.102779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.170 [2024-11-19 07:27:10.241172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.071  [2024-11-19T07:27:13.266Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-19T07:27:14.203Z] Copying: 624/1024 [MB] (312 MBps) [2024-11-19T07:27:14.461Z] Copying: 936/1024 [MB] (311 MBps) [2024-11-19T07:27:16.362Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:07.112 00:12:07.112 07:27:16 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:07.112 07:27:16 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:07.112 07:27:16 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:07.112 07:27:16 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:07.112 07:27:16 -- dd/common.sh@31 -- # xtrace_disable 00:12:07.112 07:27:16 -- common/autotest_common.sh@10 -- # set +x 00:12:07.112 { 00:12:07.112 "subsystems": [ 00:12:07.112 { 00:12:07.112 "subsystem": "bdev", 00:12:07.112 "config": [ 00:12:07.112 { 00:12:07.112 "params": { 00:12:07.112 "block_size": 512, 00:12:07.112 "num_blocks": 2097152, 00:12:07.112 "name": "malloc0" 00:12:07.112 }, 00:12:07.112 "method": "bdev_malloc_create" 00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "params": { 00:12:07.112 "io_mechanism": "io_uring", 00:12:07.112 "filename": "/dev/nullb0", 00:12:07.112 "name": "null0" 00:12:07.112 }, 00:12:07.112 "method": "bdev_xnvme_create" 00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "method": "bdev_wait_for_examine" 00:12:07.112 } 00:12:07.112 ] 00:12:07.112 } 00:12:07.112 ] 00:12:07.112 } 00:12:07.112 [2024-11-19 07:27:16.315733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
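The pass now starting differs from the two libaio passes in exactly one field: the loop at xnvme.sh@38 rewrites the io_mechanism entry before gen_conf re-emits the config. Condensed below; the declare line is a stand-in for the array assembled at xnvme.sh@34-36, with key names exactly as traced:

declare -A method_bdev_xnvme_create_0=(
    [name]=null0 [filename]=/dev/nullb0 [io_mechanism]=libaio
)
# Second iteration of: for io in "${xnvme_io[@]}"
method_bdev_xnvme_create_0["io_mechanism"]=io_uring
# The regenerated JSON now carries "io_mechanism": "io_uring" and the same two
# spdk_dd passes repeat; this run: 275-312 MBps with libaio, 321-325 MBps with io_uring.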
00:12:07.112 [2024-11-19 07:27:16.315847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67087 ] 00:12:07.370 [2024-11-19 07:27:16.464746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.370 [2024-11-19 07:27:16.610804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.273  [2024-11-19T07:27:19.457Z] Copying: 321/1024 [MB] (321 MBps) [2024-11-19T07:27:20.393Z] Copying: 643/1024 [MB] (321 MBps) [2024-11-19T07:27:20.652Z] Copying: 964/1024 [MB] (321 MBps) [2024-11-19T07:27:22.579Z] Copying: 1024/1024 [MB] (average 321 MBps) 00:12:13.329 00:12:13.329 07:27:22 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:13.329 07:27:22 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:13.329 07:27:22 -- dd/common.sh@31 -- # xtrace_disable 00:12:13.329 07:27:22 -- common/autotest_common.sh@10 -- # set +x 00:12:13.329 { 00:12:13.329 "subsystems": [ 00:12:13.329 { 00:12:13.329 "subsystem": "bdev", 00:12:13.329 "config": [ 00:12:13.329 { 00:12:13.329 "params": { 00:12:13.329 "block_size": 512, 00:12:13.329 "num_blocks": 2097152, 00:12:13.329 "name": "malloc0" 00:12:13.329 }, 00:12:13.329 "method": "bdev_malloc_create" 00:12:13.329 }, 00:12:13.329 { 00:12:13.329 "params": { 00:12:13.329 "io_mechanism": "io_uring", 00:12:13.329 "filename": "/dev/nullb0", 00:12:13.329 "name": "null0" 00:12:13.329 }, 00:12:13.329 "method": "bdev_xnvme_create" 00:12:13.329 }, 00:12:13.329 { 00:12:13.329 "method": "bdev_wait_for_examine" 00:12:13.329 } 00:12:13.329 ] 00:12:13.329 } 00:12:13.329 ] 00:12:13.329 } 00:12:13.587 [2024-11-19 07:27:22.587368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:13.587 [2024-11-19 07:27:22.587484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67163 ] 00:12:13.587 [2024-11-19 07:27:22.734143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.846 [2024-11-19 07:27:22.870539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.747  [2024-11-19T07:27:25.931Z] Copying: 325/1024 [MB] (325 MBps) [2024-11-19T07:27:26.865Z] Copying: 651/1024 [MB] (325 MBps) [2024-11-19T07:27:26.865Z] Copying: 977/1024 [MB] (325 MBps) [2024-11-19T07:27:28.770Z] Copying: 1024/1024 [MB] (average 325 MBps) 00:12:19.520 00:12:19.520 07:27:28 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:19.520 07:27:28 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:19.520 00:12:19.520 real 0m25.933s 00:12:19.520 user 0m22.875s 00:12:19.520 sys 0m2.510s 00:12:19.520 ************************************ 00:12:19.521 END TEST xnvme_to_malloc_dd_copy 00:12:19.521 ************************************ 00:12:19.521 07:27:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:19.521 07:27:28 -- common/autotest_common.sh@10 -- # set +x 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:19.781 07:27:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:19.781 07:27:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.781 07:27:28 -- common/autotest_common.sh@10 -- # set +x 00:12:19.781 ************************************ 00:12:19.781 START TEST xnvme_bdevperf 00:12:19.781 ************************************ 00:12:19.781 07:27:28 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:19.781 07:27:28 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:19.781 07:27:28 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:19.781 07:27:28 -- dd/common.sh@191 -- # return 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@60 -- # local io 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:19.781 07:27:28 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:19.781 07:27:28 -- dd/common.sh@31 -- # xtrace_disable 00:12:19.781 07:27:28 -- common/autotest_common.sh@10 -- # set +x 00:12:19.781 { 00:12:19.781 "subsystems": [ 00:12:19.781 { 00:12:19.781 "subsystem": "bdev", 00:12:19.781 "config": [ 00:12:19.781 { 00:12:19.781 "params": { 00:12:19.781 "io_mechanism": "libaio", 
00:12:19.781 "filename": "/dev/nullb0", 00:12:19.781 "name": "null0" 00:12:19.781 }, 00:12:19.781 "method": "bdev_xnvme_create" 00:12:19.781 }, 00:12:19.781 { 00:12:19.781 "method": "bdev_wait_for_examine" 00:12:19.781 } 00:12:19.781 ] 00:12:19.781 } 00:12:19.781 ] 00:12:19.781 } 00:12:19.781 [2024-11-19 07:27:28.894915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:19.781 [2024-11-19 07:27:28.895025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67262 ] 00:12:20.042 [2024-11-19 07:27:29.045661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.042 [2024-11-19 07:27:29.206987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.302 Running I/O for 5 seconds... 00:12:25.574 00:12:25.574 Latency(us) 00:12:25.574 [2024-11-19T07:27:34.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.575 [2024-11-19T07:27:34.825Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:25.575 null0 : 5.00 208602.79 814.85 0.00 0.00 304.56 113.43 718.38 00:12:25.575 [2024-11-19T07:27:34.825Z] =================================================================================================================== 00:12:25.575 [2024-11-19T07:27:34.825Z] Total : 208602.79 814.85 0.00 0.00 304.56 113.43 718.38 00:12:25.836 07:27:35 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:25.836 07:27:35 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:25.836 07:27:35 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:25.836 07:27:35 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:25.836 07:27:35 -- dd/common.sh@31 -- # xtrace_disable 00:12:25.836 07:27:35 -- common/autotest_common.sh@10 -- # set +x 00:12:25.836 { 00:12:25.836 "subsystems": [ 00:12:25.836 { 00:12:25.836 "subsystem": "bdev", 00:12:25.836 "config": [ 00:12:25.836 { 00:12:25.836 "params": { 00:12:25.836 "io_mechanism": "io_uring", 00:12:25.836 "filename": "/dev/nullb0", 00:12:25.836 "name": "null0" 00:12:25.836 }, 00:12:25.836 "method": "bdev_xnvme_create" 00:12:25.836 }, 00:12:25.836 { 00:12:25.836 "method": "bdev_wait_for_examine" 00:12:25.836 } 00:12:25.836 ] 00:12:25.836 } 00:12:25.836 ] 00:12:25.836 } 00:12:26.096 [2024-11-19 07:27:35.111872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:26.096 [2024-11-19 07:27:35.111982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67331 ] 00:12:26.096 [2024-11-19 07:27:35.260349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.357 [2024-11-19 07:27:35.438991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.617 Running I/O for 5 seconds... 
00:12:31.895 00:12:31.895 Latency(us) 00:12:31.895 [2024-11-19T07:27:41.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.895 [2024-11-19T07:27:41.145Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:31.895 null0 : 5.00 232810.35 909.42 0.00 0.00 272.60 152.81 970.44 00:12:31.895 [2024-11-19T07:27:41.145Z] =================================================================================================================== 00:12:31.895 [2024-11-19T07:27:41.145Z] Total : 232810.35 909.42 0.00 0.00 272.60 152.81 970.44 00:12:32.154 07:27:41 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:32.154 07:27:41 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:32.154 00:12:32.154 real 0m12.506s 00:12:32.154 user 0m10.057s 00:12:32.154 sys 0m2.209s 00:12:32.154 07:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.154 ************************************ 00:12:32.154 END TEST xnvme_bdevperf 00:12:32.154 ************************************ 00:12:32.154 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.154 00:12:32.154 real 0m38.690s 00:12:32.154 user 0m33.048s 00:12:32.154 sys 0m4.825s 00:12:32.154 07:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.154 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.154 ************************************ 00:12:32.154 END TEST nvme_xnvme 00:12:32.154 ************************************ 00:12:32.154 07:27:41 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:32.154 07:27:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.154 07:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.154 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.415 ************************************ 00:12:32.415 START TEST blockdev_xnvme 00:12:32.415 ************************************ 00:12:32.415 07:27:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:32.415 * Looking for test storage... 00:12:32.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:32.415 07:27:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:32.415 07:27:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:32.415 07:27:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:32.415 07:27:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:32.415 07:27:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:32.415 07:27:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:32.415 07:27:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:32.415 07:27:41 -- scripts/common.sh@335 -- # IFS=.-: 00:12:32.415 07:27:41 -- scripts/common.sh@335 -- # read -ra ver1 00:12:32.415 07:27:41 -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.415 07:27:41 -- scripts/common.sh@336 -- # read -ra ver2 00:12:32.415 07:27:41 -- scripts/common.sh@337 -- # local 'op=<' 00:12:32.415 07:27:41 -- scripts/common.sh@339 -- # ver1_l=2 00:12:32.415 07:27:41 -- scripts/common.sh@340 -- # ver2_l=1 00:12:32.415 07:27:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:32.415 07:27:41 -- scripts/common.sh@343 -- # case "$op" in 00:12:32.415 07:27:41 -- scripts/common.sh@344 -- # : 1 00:12:32.415 07:27:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:32.415 07:27:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.415 07:27:41 -- scripts/common.sh@364 -- # decimal 1 00:12:32.415 07:27:41 -- scripts/common.sh@352 -- # local d=1 00:12:32.415 07:27:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.415 07:27:41 -- scripts/common.sh@354 -- # echo 1 00:12:32.415 07:27:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:32.415 07:27:41 -- scripts/common.sh@365 -- # decimal 2 00:12:32.415 07:27:41 -- scripts/common.sh@352 -- # local d=2 00:12:32.415 07:27:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.415 07:27:41 -- scripts/common.sh@354 -- # echo 2 00:12:32.415 07:27:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:32.415 07:27:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:32.415 07:27:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:32.415 07:27:41 -- scripts/common.sh@367 -- # return 0 00:12:32.415 07:27:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.415 07:27:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:32.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.415 --rc genhtml_branch_coverage=1 00:12:32.415 --rc genhtml_function_coverage=1 00:12:32.415 --rc genhtml_legend=1 00:12:32.415 --rc geninfo_all_blocks=1 00:12:32.415 --rc geninfo_unexecuted_blocks=1 00:12:32.415 00:12:32.415 ' 00:12:32.415 07:27:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:32.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.415 --rc genhtml_branch_coverage=1 00:12:32.415 --rc genhtml_function_coverage=1 00:12:32.415 --rc genhtml_legend=1 00:12:32.415 --rc geninfo_all_blocks=1 00:12:32.415 --rc geninfo_unexecuted_blocks=1 00:12:32.415 00:12:32.415 ' 00:12:32.415 07:27:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:32.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.415 --rc genhtml_branch_coverage=1 00:12:32.415 --rc genhtml_function_coverage=1 00:12:32.415 --rc genhtml_legend=1 00:12:32.415 --rc geninfo_all_blocks=1 00:12:32.415 --rc geninfo_unexecuted_blocks=1 00:12:32.415 00:12:32.416 ' 00:12:32.416 07:27:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:32.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.416 --rc genhtml_branch_coverage=1 00:12:32.416 --rc genhtml_function_coverage=1 00:12:32.416 --rc genhtml_legend=1 00:12:32.416 --rc geninfo_all_blocks=1 00:12:32.416 --rc geninfo_unexecuted_blocks=1 00:12:32.416 00:12:32.416 ' 00:12:32.416 07:27:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:32.416 07:27:41 -- bdev/nbd_common.sh@6 -- # set -e 00:12:32.416 07:27:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:32.416 07:27:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:32.416 07:27:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:32.416 07:27:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:32.416 07:27:41 -- bdev/blockdev.sh@18 -- # : 00:12:32.416 07:27:41 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:32.416 07:27:41 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:32.416 07:27:41 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:32.416 07:27:41 -- bdev/blockdev.sh@672 -- # uname -s 00:12:32.416 07:27:41 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:32.416 07:27:41 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:32.416 07:27:41 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:32.416 07:27:41 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:32.416 07:27:41 -- bdev/blockdev.sh@682 -- # dek= 00:12:32.416 07:27:41 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:32.416 07:27:41 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:32.416 07:27:41 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:32.416 07:27:41 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:32.416 07:27:41 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:32.416 07:27:41 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:32.416 07:27:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67472 00:12:32.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.416 07:27:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:32.416 07:27:41 -- bdev/blockdev.sh@47 -- # waitforlisten 67472 00:12:32.416 07:27:41 -- common/autotest_common.sh@829 -- # '[' -z 67472 ']' 00:12:32.416 07:27:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.416 07:27:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.416 07:27:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.416 07:27:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.416 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.416 07:27:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:32.416 [2024-11-19 07:27:41.638280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:32.416 [2024-11-19 07:27:41.638430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67472 ] 00:12:32.676 [2024-11-19 07:27:41.801751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.936 [2024-11-19 07:27:41.946482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.936 [2024-11-19 07:27:41.946640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.194 07:27:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.194 07:27:42 -- common/autotest_common.sh@862 -- # return 0 00:12:33.194 07:27:42 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:33.194 07:27:42 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:33.194 07:27:42 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:33.194 07:27:42 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:33.194 07:27:42 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:33.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:33.762 Waiting for block devices as requested 00:12:33.762 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:33.762 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:33.762 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:34.019 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:39.300 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:39.300 07:27:48 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:39.300 07:27:48 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:39.300 07:27:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:39.300 07:27:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:39.300 07:27:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:39.300 07:27:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:39.300 07:27:48 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:39.300 07:27:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:39.300 07:27:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:39.300 07:27:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:39.300 07:27:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:39.300 07:27:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:39.300 07:27:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:39.301 07:27:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:39.301 07:27:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:39.301 07:27:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:39.301 07:27:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:39.301 07:27:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:39.301 07:27:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:39.301 07:27:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:39.301 07:27:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:39.301 07:27:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:39.301 07:27:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:39.301 07:27:48 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:39.301 07:27:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:39.301 07:27:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:39.301 07:27:48 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:39.301 07:27:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:39.301 07:27:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:39.301 07:27:48 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:39.301 07:27:48 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:39.301 07:27:48 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:39.301 07:27:48 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:39.301 07:27:48 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:39.301 07:27:48 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:39.301 07:27:48 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:39.301 07:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.301 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.301 07:27:48 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:39.301 nvme0n1 00:12:39.301 nvme1n1 00:12:39.301 nvme1n2 00:12:39.301 nvme1n3 00:12:39.301 nvme2n1 00:12:39.301 nvme3n1 00:12:39.301 07:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:39.301 07:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.301 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.301 07:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@738 -- # cat 00:12:39.301 07:27:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:39.301 07:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.301 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.301 07:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:39.301 07:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.301 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.301 07:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:39.301 07:27:48 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.301 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.301 07:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:39.301 07:27:48 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:39.301 07:27:48 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:39.301 07:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.301 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.301 07:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.301 07:27:48 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:39.301 07:27:48 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3935344e-b0ac-48e3-9734-9e298966ef7d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3935344e-b0ac-48e3-9734-9e298966ef7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b8f9ef06-c449-4e80-b376-f44fbc5d49c1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8f9ef06-c449-4e80-b376-f44fbc5d49c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "abea8d90-468b-40bb-87a1-8e5b0e983bb0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "abea8d90-468b-40bb-87a1-8e5b0e983bb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "f18f218f-88da-4f7c-8bdc-c26f5cb8fd31"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f18f218f-88da-4f7c-8bdc-c26f5cb8fd31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 
'{' ' "name": "nvme2n1",' ' "aliases": [' ' "15fa82ef-f7ee-4b7f-9431-9a36b17ead0b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "15fa82ef-f7ee-4b7f-9431-9a36b17ead0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ea9fe6b3-9b88-4113-969f-35a69391fe19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ea9fe6b3-9b88-4113-969f-35a69391fe19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:39.301 07:27:48 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:39.301 07:27:48 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:39.301 07:27:48 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:39.301 07:27:48 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:39.301 07:27:48 -- bdev/blockdev.sh@752 -- # killprocess 67472 00:12:39.301 07:27:48 -- common/autotest_common.sh@936 -- # '[' -z 67472 ']' 00:12:39.301 07:27:48 -- common/autotest_common.sh@940 -- # kill -0 67472 00:12:39.301 07:27:48 -- common/autotest_common.sh@941 -- # uname 00:12:39.301 07:27:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:39.301 07:27:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67472 00:12:39.301 07:27:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:39.301 killing process with pid 67472 00:12:39.301 07:27:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:39.301 07:27:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67472' 00:12:39.301 07:27:48 -- common/autotest_common.sh@955 -- # kill 67472 00:12:39.301 07:27:48 -- common/autotest_common.sh@960 -- # wait 67472 00:12:40.244 07:27:49 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:40.244 07:27:49 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:40.244 07:27:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:40.244 07:27:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.244 07:27:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.505 ************************************ 00:12:40.505 START TEST bdev_hello_world 00:12:40.505 ************************************ 00:12:40.505 07:27:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:40.505 [2024-11-19 07:27:49.580115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:40.505 [2024-11-19 07:27:49.580257] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67852 ] 00:12:40.505 [2024-11-19 07:27:49.729431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.767 [2024-11-19 07:27:49.878383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.027 [2024-11-19 07:27:50.163522] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:41.027 [2024-11-19 07:27:50.163566] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:41.027 [2024-11-19 07:27:50.163578] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:41.027 [2024-11-19 07:27:50.164978] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:41.027 [2024-11-19 07:27:50.165314] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:41.027 [2024-11-19 07:27:50.165337] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:41.027 [2024-11-19 07:27:50.165517] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:41.027 00:12:41.027 [2024-11-19 07:27:50.165537] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:41.598 00:12:41.598 real 0m1.270s 00:12:41.598 user 0m1.005s 00:12:41.598 sys 0m0.155s 00:12:41.598 07:27:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.598 ************************************ 00:12:41.598 END TEST bdev_hello_world 00:12:41.598 ************************************ 00:12:41.598 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:12:41.598 07:27:50 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:41.598 07:27:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:41.598 07:27:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.598 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:12:41.598 ************************************ 00:12:41.598 START TEST bdev_bounds 00:12:41.598 ************************************ 00:12:41.598 07:27:50 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:41.598 07:27:50 -- bdev/blockdev.sh@288 -- # bdevio_pid=67889 00:12:41.598 07:27:50 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:41.598 Process bdevio pid: 67889 00:12:41.598 07:27:50 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67889' 00:12:41.598 07:27:50 -- bdev/blockdev.sh@291 -- # waitforlisten 67889 00:12:41.598 07:27:50 -- common/autotest_common.sh@829 -- # '[' -z 67889 ']' 00:12:41.598 07:27:50 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:41.598 07:27:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.598 07:27:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.599 07:27:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
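The waitforlisten call traced just above blocks until the freshly spawned bdevio process answers on its RPC socket. The real helper in common/autotest_common.sh is more elaborate; this is only a rough sketch of the polling idea — waitforlisten_sketch, the 0.1 s interval, and the rpc_get_methods probe are illustrative assumptions, not verbatim source (the retry bound of 100 does match the trace):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   -t 1 rpc_get_methods &>/dev/null; then
                return 0                             # socket is accepting RPCs
            fi
            sleep 0.1
        done
        return 1
    }
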
00:12:41.599 07:27:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.599 07:27:50 -- common/autotest_common.sh@10 -- # set +x 00:12:41.859 [2024-11-19 07:27:50.892493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:41.859 [2024-11-19 07:27:50.892613] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67889 ] 00:12:41.859 [2024-11-19 07:27:51.038601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:42.120 [2024-11-19 07:27:51.180170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.120 [2024-11-19 07:27:51.180361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.120 [2024-11-19 07:27:51.180486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.728 07:27:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.728 07:27:51 -- common/autotest_common.sh@862 -- # return 0 00:12:42.728 07:27:51 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:42.728 I/O targets: 00:12:42.728 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:42.728 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:42.728 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:42.728 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:42.728 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:42.728 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:42.728 00:12:42.728 00:12:42.728 CUnit - A unit testing framework for C - Version 2.1-3 00:12:42.728 http://cunit.sourceforge.net/ 00:12:42.728 00:12:42.728 00:12:42.728 Suite: bdevio tests on: nvme3n1 00:12:42.728 Test: blockdev write read block ...passed 00:12:42.728 Test: blockdev write zeroes read block ...passed 00:12:42.728 Test: blockdev write zeroes read no split ...passed 00:12:42.728 Test: blockdev write zeroes read split ...passed 00:12:42.728 Test: blockdev write zeroes read split partial ...passed 00:12:42.728 Test: blockdev reset ...passed 00:12:42.728 Test: blockdev write read 8 blocks ...passed 00:12:42.728 Test: blockdev write read size > 128k ...passed 00:12:42.728 Test: blockdev write read invalid size ...passed 00:12:42.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.728 Test: blockdev write read max offset ...passed 00:12:42.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.728 Test: blockdev writev readv 8 blocks ...passed 00:12:42.728 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.728 Test: blockdev writev readv block ...passed 00:12:42.728 Test: blockdev writev readv size > 128k ...passed 00:12:42.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.728 Test: blockdev comparev and writev ...passed 00:12:42.728 Test: blockdev nvme passthru rw ...passed 00:12:42.728 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.728 Test: blockdev nvme admin passthru ...passed 00:12:42.728 Test: blockdev copy ...passed 00:12:42.728 Suite: bdevio tests on: nvme2n1 00:12:42.728 Test: blockdev write read block ...passed 00:12:42.728 Test: blockdev write zeroes read block ...passed 00:12:42.728 Test: blockdev write zeroes read no split ...passed 00:12:42.728 Test: blockdev 
write zeroes read split ...passed 00:12:42.728 Test: blockdev write zeroes read split partial ...passed 00:12:42.728 Test: blockdev reset ...passed 00:12:42.728 Test: blockdev write read 8 blocks ...passed 00:12:42.728 Test: blockdev write read size > 128k ...passed 00:12:42.728 Test: blockdev write read invalid size ...passed 00:12:42.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.728 Test: blockdev write read max offset ...passed 00:12:42.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.728 Test: blockdev writev readv 8 blocks ...passed 00:12:42.728 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.728 Test: blockdev writev readv block ...passed 00:12:42.728 Test: blockdev writev readv size > 128k ...passed 00:12:42.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.728 Test: blockdev comparev and writev ...passed 00:12:42.728 Test: blockdev nvme passthru rw ...passed 00:12:42.728 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.728 Test: blockdev nvme admin passthru ...passed 00:12:42.728 Test: blockdev copy ...passed 00:12:42.728 Suite: bdevio tests on: nvme1n3 00:12:42.728 Test: blockdev write read block ...passed 00:12:42.728 Test: blockdev write zeroes read block ...passed 00:12:42.728 Test: blockdev write zeroes read no split ...passed 00:12:42.728 Test: blockdev write zeroes read split ...passed 00:12:42.728 Test: blockdev write zeroes read split partial ...passed 00:12:42.728 Test: blockdev reset ...passed 00:12:42.728 Test: blockdev write read 8 blocks ...passed 00:12:42.728 Test: blockdev write read size > 128k ...passed 00:12:42.728 Test: blockdev write read invalid size ...passed 00:12:42.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.728 Test: blockdev write read max offset ...passed 00:12:42.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.728 Test: blockdev writev readv 8 blocks ...passed 00:12:42.728 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.728 Test: blockdev writev readv block ...passed 00:12:42.729 Test: blockdev writev readv size > 128k ...passed 00:12:42.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.729 Test: blockdev comparev and writev ...passed 00:12:42.729 Test: blockdev nvme passthru rw ...passed 00:12:42.729 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.729 Test: blockdev nvme admin passthru ...passed 00:12:42.729 Test: blockdev copy ...passed 00:12:42.729 Suite: bdevio tests on: nvme1n2 00:12:42.729 Test: blockdev write read block ...passed 00:12:42.729 Test: blockdev write zeroes read block ...passed 00:12:42.729 Test: blockdev write zeroes read no split ...passed 00:12:42.996 Test: blockdev write zeroes read split ...passed 00:12:42.996 Test: blockdev write zeroes read split partial ...passed 00:12:42.996 Test: blockdev reset ...passed 00:12:42.996 Test: blockdev write read 8 blocks ...passed 00:12:42.996 Test: blockdev write read size > 128k ...passed 00:12:42.996 Test: blockdev write read invalid size ...passed 00:12:42.996 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.996 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.996 Test: blockdev write read max offset 
...passed 00:12:42.996 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.996 Test: blockdev writev readv 8 blocks ...passed 00:12:42.996 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.996 Test: blockdev writev readv block ...passed 00:12:42.996 Test: blockdev writev readv size > 128k ...passed 00:12:42.996 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.996 Test: blockdev comparev and writev ...passed 00:12:42.996 Test: blockdev nvme passthru rw ...passed 00:12:42.996 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.996 Test: blockdev nvme admin passthru ...passed 00:12:42.996 Test: blockdev copy ...passed 00:12:42.996 Suite: bdevio tests on: nvme1n1 00:12:42.996 Test: blockdev write read block ...passed 00:12:42.996 Test: blockdev write zeroes read block ...passed 00:12:42.996 Test: blockdev write zeroes read no split ...passed 00:12:42.996 Test: blockdev write zeroes read split ...passed 00:12:42.996 Test: blockdev write zeroes read split partial ...passed 00:12:42.996 Test: blockdev reset ...passed 00:12:42.996 Test: blockdev write read 8 blocks ...passed 00:12:42.996 Test: blockdev write read size > 128k ...passed 00:12:42.996 Test: blockdev write read invalid size ...passed 00:12:42.996 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.996 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.996 Test: blockdev write read max offset ...passed 00:12:42.996 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.996 Test: blockdev writev readv 8 blocks ...passed 00:12:42.996 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.996 Test: blockdev writev readv block ...passed 00:12:42.996 Test: blockdev writev readv size > 128k ...passed 00:12:42.996 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.996 Test: blockdev comparev and writev ...passed 00:12:42.996 Test: blockdev nvme passthru rw ...passed 00:12:42.996 Test: blockdev nvme passthru vendor specific ...passed 00:12:42.996 Test: blockdev nvme admin passthru ...passed 00:12:42.996 Test: blockdev copy ...passed 00:12:42.996 Suite: bdevio tests on: nvme0n1 00:12:42.996 Test: blockdev write read block ...passed 00:12:42.996 Test: blockdev write zeroes read block ...passed 00:12:42.996 Test: blockdev write zeroes read no split ...passed 00:12:42.996 Test: blockdev write zeroes read split ...passed 00:12:42.996 Test: blockdev write zeroes read split partial ...passed 00:12:42.996 Test: blockdev reset ...passed 00:12:42.996 Test: blockdev write read 8 blocks ...passed 00:12:42.996 Test: blockdev write read size > 128k ...passed 00:12:42.996 Test: blockdev write read invalid size ...passed 00:12:42.996 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:42.996 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:42.996 Test: blockdev write read max offset ...passed 00:12:42.996 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:42.997 Test: blockdev writev readv 8 blocks ...passed 00:12:42.997 Test: blockdev writev readv 30 x 1block ...passed 00:12:42.997 Test: blockdev writev readv block ...passed 00:12:42.997 Test: blockdev writev readv size > 128k ...passed 00:12:42.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:42.997 Test: blockdev comparev and writev ...passed 00:12:42.997 Test: blockdev nvme passthru rw ...passed 00:12:42.997 Test: 
blockdev nvme passthru vendor specific ...passed 00:12:42.997 Test: blockdev nvme admin passthru ...passed 00:12:42.997 Test: blockdev copy ...passed 00:12:42.997 00:12:42.997 Run Summary: Type Total Ran Passed Failed Inactive 00:12:42.997 suites 6 6 n/a 0 0 00:12:42.997 tests 138 138 138 0 0 00:12:42.997 asserts 780 780 780 0 n/a 00:12:42.997 00:12:42.997 Elapsed time = 0.889 seconds 00:12:42.997 0 00:12:42.997 07:27:52 -- bdev/blockdev.sh@293 -- # killprocess 67889 00:12:42.997 07:27:52 -- common/autotest_common.sh@936 -- # '[' -z 67889 ']' 00:12:42.997 07:27:52 -- common/autotest_common.sh@940 -- # kill -0 67889 00:12:42.997 07:27:52 -- common/autotest_common.sh@941 -- # uname 00:12:42.997 07:27:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:42.997 07:27:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67889 00:12:42.997 killing process with pid 67889 00:12:42.997 07:27:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:42.997 07:27:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:42.997 07:27:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67889' 00:12:42.997 07:27:52 -- common/autotest_common.sh@955 -- # kill 67889 00:12:42.997 07:27:52 -- common/autotest_common.sh@960 -- # wait 67889 00:12:43.566 07:27:52 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:43.566 00:12:43.566 real 0m1.950s 00:12:43.566 user 0m4.706s 00:12:43.566 sys 0m0.250s 00:12:43.566 07:27:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:43.566 ************************************ 00:12:43.566 END TEST bdev_bounds 00:12:43.566 ************************************ 00:12:43.566 07:27:52 -- common/autotest_common.sh@10 -- # set +x 00:12:43.566 07:27:52 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:43.566 07:27:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:43.566 07:27:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.566 07:27:52 -- common/autotest_common.sh@10 -- # set +x 00:12:43.825 ************************************ 00:12:43.825 START TEST bdev_nbd 00:12:43.825 ************************************ 00:12:43.825 07:27:52 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:43.825 07:27:52 -- bdev/blockdev.sh@298 -- # uname -s 00:12:43.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
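The bdev_nbd test starting here drives everything through the spdk-nbd.sock RPC channel. Stripped of the helper plumbing, the round trip it exercises per device looks roughly like this; nbd_start_disk, nbd_get_disks and nbd_stop_disk are the RPCs visible in the trace, while the error handling of the real nbd_common.sh is omitted:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc nbd_start_disk nvme0n1 /dev/nbd0   # export the bdev as a kernel block device
    $rpc nbd_get_disks                      # JSON list of active bdev<->nbd mappings
    $rpc nbd_stop_disk /dev/nbd0            # detach the mapping again
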
00:12:43.825 07:27:52 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:43.825 07:27:52 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.825 07:27:52 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:43.825 07:27:52 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:43.825 07:27:52 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:43.825 07:27:52 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:43.825 07:27:52 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:43.825 07:27:52 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:43.826 07:27:52 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:43.826 07:27:52 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:43.826 07:27:52 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:43.826 07:27:52 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:43.826 07:27:52 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:43.826 07:27:52 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:43.826 07:27:52 -- bdev/blockdev.sh@316 -- # nbd_pid=67944 00:12:43.826 07:27:52 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:43.826 07:27:52 -- bdev/blockdev.sh@318 -- # waitforlisten 67944 /var/tmp/spdk-nbd.sock 00:12:43.826 07:27:52 -- common/autotest_common.sh@829 -- # '[' -z 67944 ']' 00:12:43.826 07:27:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:43.826 07:27:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.826 07:27:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:43.826 07:27:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.826 07:27:52 -- common/autotest_common.sh@10 -- # set +x 00:12:43.826 07:27:52 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:43.826 [2024-11-19 07:27:52.883947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
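The waitfornbd checks that dominate the next stretch of the trace reduce to polling /proc/partitions until the kernel has registered the new device. A condensed sketch; the bound of 20 attempts and the grep -q -w test match the trace, while waitfornbd_sketch and the 0.1 s sleep are assumptions:

    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1
    }

    waitfornbd_sketch nbd0
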
00:12:43.826 [2024-11-19 07:27:52.884051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.826 [2024-11-19 07:27:53.034355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.086 [2024-11-19 07:27:53.204388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.657 07:27:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.657 07:27:53 -- common/autotest_common.sh@862 -- # return 0 00:12:44.657 07:27:53 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@24 -- # local i 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:44.657 07:27:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:44.918 07:27:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:44.918 07:27:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:44.918 07:27:53 -- common/autotest_common.sh@867 -- # local i 00:12:44.918 07:27:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:44.918 07:27:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:44.918 07:27:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:44.918 07:27:53 -- common/autotest_common.sh@871 -- # break 00:12:44.918 07:27:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:44.918 07:27:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:44.918 07:27:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.918 1+0 records in 00:12:44.918 1+0 records out 00:12:44.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004749 s, 8.6 MB/s 00:12:44.918 07:27:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.918 07:27:53 -- common/autotest_common.sh@884 -- # size=4096 00:12:44.918 07:27:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.918 07:27:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:44.918 07:27:53 -- common/autotest_common.sh@887 -- # return 0 00:12:44.918 07:27:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.918 07:27:53 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:44.918 07:27:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:44.918 07:27:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:44.918 07:27:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:44.918 07:27:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:44.918 07:27:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:44.918 07:27:54 -- common/autotest_common.sh@867 -- # local i 00:12:44.918 07:27:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:44.918 07:27:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:44.918 07:27:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:44.918 07:27:54 -- common/autotest_common.sh@871 -- # break 00:12:44.918 07:27:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:44.918 07:27:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:44.918 07:27:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.918 1+0 records in 00:12:44.918 1+0 records out 00:12:44.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343503 s, 11.9 MB/s 00:12:44.918 07:27:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.918 07:27:54 -- common/autotest_common.sh@884 -- # size=4096 00:12:44.918 07:27:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.918 07:27:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:44.918 07:27:54 -- common/autotest_common.sh@887 -- # return 0 00:12:44.918 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.918 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:44.918 07:27:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:45.179 07:27:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:45.179 07:27:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:45.179 07:27:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:45.179 07:27:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:45.179 07:27:54 -- common/autotest_common.sh@867 -- # local i 00:12:45.179 07:27:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.179 07:27:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.179 07:27:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:45.179 07:27:54 -- common/autotest_common.sh@871 -- # break 00:12:45.179 07:27:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.179 07:27:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.179 07:27:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.179 1+0 records in 00:12:45.179 1+0 records out 00:12:45.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504871 s, 8.1 MB/s 00:12:45.179 07:27:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.179 07:27:54 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.179 07:27:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.179 07:27:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.179 07:27:54 -- common/autotest_common.sh@887 -- # return 0 
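Each mapping is then smoke-tested identically: read one 4 KiB block through the nbd node with O_DIRECT and confirm a full block arrived. The dd/stat pairs in the trace boil down to the following, with the scratch-file path as used in this run:

    testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    dd if=/dev/nbd1 of="$testfile" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$testfile")
    rm -f "$testfile"
    [ "$size" != 0 ]   # a non-empty read means the mapping is serving I/O
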
00:12:45.179 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.179 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:45.179 07:27:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:45.439 07:27:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:45.439 07:27:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:45.439 07:27:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:45.439 07:27:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:45.439 07:27:54 -- common/autotest_common.sh@867 -- # local i 00:12:45.439 07:27:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.439 07:27:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.439 07:27:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:45.439 07:27:54 -- common/autotest_common.sh@871 -- # break 00:12:45.439 07:27:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.439 07:27:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.439 07:27:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.439 1+0 records in 00:12:45.439 1+0 records out 00:12:45.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549767 s, 7.5 MB/s 00:12:45.439 07:27:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.440 07:27:54 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.440 07:27:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.440 07:27:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.440 07:27:54 -- common/autotest_common.sh@887 -- # return 0 00:12:45.440 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.440 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:45.440 07:27:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:45.701 07:27:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:45.701 07:27:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:45.701 07:27:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:45.701 07:27:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:45.701 07:27:54 -- common/autotest_common.sh@867 -- # local i 00:12:45.701 07:27:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.701 07:27:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.701 07:27:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:45.701 07:27:54 -- common/autotest_common.sh@871 -- # break 00:12:45.701 07:27:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.701 07:27:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.701 07:27:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.701 1+0 records in 00:12:45.701 1+0 records out 00:12:45.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509526 s, 8.0 MB/s 00:12:45.701 07:27:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.701 07:27:54 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.701 07:27:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.701 07:27:54 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:12:45.701 07:27:54 -- common/autotest_common.sh@887 -- # return 0 00:12:45.701 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.701 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:45.701 07:27:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:45.962 07:27:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:45.962 07:27:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:45.962 07:27:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:45.962 07:27:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:45.962 07:27:54 -- common/autotest_common.sh@867 -- # local i 00:12:45.962 07:27:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.962 07:27:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.962 07:27:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:45.962 07:27:54 -- common/autotest_common.sh@871 -- # break 00:12:45.962 07:27:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.962 07:27:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.962 07:27:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.962 1+0 records in 00:12:45.962 1+0 records out 00:12:45.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268883 s, 15.2 MB/s 00:12:45.962 07:27:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.962 07:27:54 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.962 07:27:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.962 07:27:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.962 07:27:54 -- common/autotest_common.sh@887 -- # return 0 00:12:45.962 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.962 07:27:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:45.962 07:27:54 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:45.962 07:27:55 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd0", 00:12:45.962 "bdev_name": "nvme0n1" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd1", 00:12:45.962 "bdev_name": "nvme1n1" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd2", 00:12:45.962 "bdev_name": "nvme1n2" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd3", 00:12:45.962 "bdev_name": "nvme1n3" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd4", 00:12:45.962 "bdev_name": "nvme2n1" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd5", 00:12:45.962 "bdev_name": "nvme3n1" 00:12:45.962 } 00:12:45.962 ]' 00:12:45.962 07:27:55 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:45.962 07:27:55 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd0", 00:12:45.962 "bdev_name": "nvme0n1" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd1", 00:12:45.962 "bdev_name": "nvme1n1" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd2", 00:12:45.962 "bdev_name": "nvme1n2" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": "/dev/nbd3", 00:12:45.962 "bdev_name": "nvme1n3" 00:12:45.962 }, 00:12:45.962 { 00:12:45.962 "nbd_device": 
"/dev/nbd4", 00:12:45.963 "bdev_name": "nvme2n1" 00:12:45.963 }, 00:12:45.963 { 00:12:45.963 "nbd_device": "/dev/nbd5", 00:12:45.963 "bdev_name": "nvme3n1" 00:12:45.963 } 00:12:45.963 ]' 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@51 -- # local i 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.963 07:27:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@41 -- # break 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.223 07:27:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@41 -- # break 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@41 -- # break 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.484 07:27:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:46.744 
07:27:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@41 -- # break 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.744 07:27:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@41 -- # break 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.005 07:27:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@41 -- # break 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:47.266 07:27:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:47.527 07:27:56 -- bdev/nbd_common.sh@65 -- # true 00:12:47.527 07:27:56 -- bdev/nbd_common.sh@65 -- # count=0 00:12:47.527 07:27:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@122 -- # count=0 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@127 -- # return 0 00:12:47.528 07:27:56 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@12 -- # local i 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:47.528 /dev/nbd0 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.528 07:27:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:47.528 07:27:56 -- common/autotest_common.sh@867 -- # local i 00:12:47.528 07:27:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:47.528 07:27:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:47.528 07:27:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:47.528 07:27:56 -- common/autotest_common.sh@871 -- # break 00:12:47.528 07:27:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:47.528 07:27:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:47.528 07:27:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.528 1+0 records in 00:12:47.528 1+0 records out 00:12:47.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543365 s, 7.5 MB/s 00:12:47.528 07:27:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.528 07:27:56 -- common/autotest_common.sh@884 -- # size=4096 00:12:47.528 07:27:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.528 07:27:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:47.528 07:27:56 -- common/autotest_common.sh@887 -- # return 0 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:47.528 07:27:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:47.789 /dev/nbd1 00:12:47.789 07:27:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:47.789 07:27:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:47.789 07:27:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:47.789 07:27:56 -- common/autotest_common.sh@867 -- # local i 00:12:47.789 07:27:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:47.789 07:27:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:47.789 07:27:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:47.789 07:27:56 -- common/autotest_common.sh@871 -- # break 
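The six near-identical blocks in this stretch come from one loop in bdev/nbd_common.sh: pair each bdev with an nbd node, create the mapping over RPC, and wait for the kernel device. A condensed sketch — the loop shape is inferred from the trace, and waitfornbd is the autotest_common.sh helper polled above:

    bdev_list=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for i in "${!bdev_list[@]}"; do
        $rpc nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
        waitfornbd "$(basename "${nbd_list[$i]}")"
    done
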
00:12:47.789 07:27:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:47.789 07:27:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:47.789 07:27:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.789 1+0 records in 00:12:47.789 1+0 records out 00:12:47.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364506 s, 11.2 MB/s 00:12:47.789 07:27:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.789 07:27:56 -- common/autotest_common.sh@884 -- # size=4096 00:12:47.789 07:27:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.789 07:27:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:47.789 07:27:56 -- common/autotest_common.sh@887 -- # return 0 00:12:47.789 07:27:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.789 07:27:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:47.789 07:27:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:48.050 /dev/nbd10 00:12:48.050 07:27:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:48.050 07:27:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:48.050 07:27:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:48.050 07:27:57 -- common/autotest_common.sh@867 -- # local i 00:12:48.050 07:27:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.050 07:27:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.050 07:27:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:48.050 07:27:57 -- common/autotest_common.sh@871 -- # break 00:12:48.050 07:27:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.050 07:27:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.050 07:27:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.050 1+0 records in 00:12:48.050 1+0 records out 00:12:48.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293985 s, 13.9 MB/s 00:12:48.050 07:27:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.050 07:27:57 -- common/autotest_common.sh@884 -- # size=4096 00:12:48.050 07:27:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.050 07:27:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.050 07:27:57 -- common/autotest_common.sh@887 -- # return 0 00:12:48.050 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.050 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:48.050 07:27:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:48.312 /dev/nbd11 00:12:48.312 07:27:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:48.312 07:27:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:48.312 07:27:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:48.312 07:27:57 -- common/autotest_common.sh@867 -- # local i 00:12:48.312 07:27:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.312 07:27:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.312 07:27:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:48.312 07:27:57 -- 
common/autotest_common.sh@871 -- # break 00:12:48.312 07:27:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.312 07:27:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.312 07:27:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.312 1+0 records in 00:12:48.312 1+0 records out 00:12:48.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487717 s, 8.4 MB/s 00:12:48.312 07:27:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.312 07:27:57 -- common/autotest_common.sh@884 -- # size=4096 00:12:48.312 07:27:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.312 07:27:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.312 07:27:57 -- common/autotest_common.sh@887 -- # return 0 00:12:48.312 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.312 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:48.312 07:27:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:48.312 /dev/nbd12 00:12:48.312 07:27:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:48.574 07:27:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:48.574 07:27:57 -- common/autotest_common.sh@867 -- # local i 00:12:48.574 07:27:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:48.574 07:27:57 -- common/autotest_common.sh@871 -- # break 00:12:48.574 07:27:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.574 1+0 records in 00:12:48.574 1+0 records out 00:12:48.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000975381 s, 4.2 MB/s 00:12:48.574 07:27:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.574 07:27:57 -- common/autotest_common.sh@884 -- # size=4096 00:12:48.574 07:27:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.574 07:27:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.574 07:27:57 -- common/autotest_common.sh@887 -- # return 0 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:48.574 /dev/nbd13 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:48.574 07:27:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:48.574 07:27:57 -- common/autotest_common.sh@867 -- # local i 00:12:48.574 07:27:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:12:48.574 07:27:57 -- common/autotest_common.sh@871 -- # break 00:12:48.574 07:27:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.574 07:27:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.574 1+0 records in 00:12:48.574 1+0 records out 00:12:48.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622058 s, 6.6 MB/s 00:12:48.574 07:27:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.574 07:27:57 -- common/autotest_common.sh@884 -- # size=4096 00:12:48.574 07:27:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.574 07:27:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.574 07:27:57 -- common/autotest_common.sh@887 -- # return 0 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.574 07:27:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:48.835 07:27:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd0", 00:12:48.835 "bdev_name": "nvme0n1" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd1", 00:12:48.835 "bdev_name": "nvme1n1" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd10", 00:12:48.835 "bdev_name": "nvme1n2" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd11", 00:12:48.835 "bdev_name": "nvme1n3" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd12", 00:12:48.835 "bdev_name": "nvme2n1" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd13", 00:12:48.835 "bdev_name": "nvme3n1" 00:12:48.835 } 00:12:48.835 ]' 00:12:48.835 07:27:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:48.835 07:27:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd0", 00:12:48.835 "bdev_name": "nvme0n1" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd1", 00:12:48.835 "bdev_name": "nvme1n1" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd10", 00:12:48.835 "bdev_name": "nvme1n2" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd11", 00:12:48.835 "bdev_name": "nvme1n3" 00:12:48.835 }, 00:12:48.835 { 00:12:48.835 "nbd_device": "/dev/nbd12", 00:12:48.835 "bdev_name": "nvme2n1" 00:12:48.836 }, 00:12:48.836 { 00:12:48.836 "nbd_device": "/dev/nbd13", 00:12:48.836 "bdev_name": "nvme3n1" 00:12:48.836 } 00:12:48.836 ]' 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:48.836 /dev/nbd1 00:12:48.836 /dev/nbd10 00:12:48.836 /dev/nbd11 00:12:48.836 /dev/nbd12 00:12:48.836 /dev/nbd13' 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:48.836 /dev/nbd1 00:12:48.836 /dev/nbd10 00:12:48.836 /dev/nbd11 00:12:48.836 /dev/nbd12 00:12:48.836 /dev/nbd13' 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@65 -- # count=6 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@95 -- # 
count=6 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:48.836 256+0 records in 00:12:48.836 256+0 records out 00:12:48.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00904844 s, 116 MB/s 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.836 07:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:49.096 256+0 records in 00:12:49.096 256+0 records out 00:12:49.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163781 s, 6.4 MB/s 00:12:49.096 07:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:49.096 07:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:49.096 256+0 records in 00:12:49.096 256+0 records out 00:12:49.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144321 s, 7.3 MB/s 00:12:49.096 07:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:49.096 07:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:49.357 256+0 records in 00:12:49.357 256+0 records out 00:12:49.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142571 s, 7.4 MB/s 00:12:49.357 07:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:49.357 07:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:49.618 256+0 records in 00:12:49.618 256+0 records out 00:12:49.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227333 s, 4.6 MB/s 00:12:49.618 07:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:49.618 07:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:49.879 256+0 records in 00:12:49.879 256+0 records out 00:12:49.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.283237 s, 3.7 MB/s 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:49.879 256+0 records in 00:12:49.879 256+0 records out 00:12:49.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104636 s, 10.0 MB/s 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:49.879 07:27:59 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:49.879 07:27:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:50.153 07:27:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@51 -- # local i 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.154 07:27:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@41 -- # break 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.413 07:27:59 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@41 -- # break 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.413 07:27:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@41 -- # break 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.673 07:27:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@41 -- # break 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.935 07:28:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@41 -- # break 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@41 -- # break 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.196 07:28:00 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@65 -- # true 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@65 -- # count=0 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@104 -- # count=0 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@109 -- # return 0 00:12:51.457 07:28:00 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:51.457 07:28:00 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:51.717 malloc_lvol_verify 00:12:51.717 07:28:00 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:51.978 8bf12f91-36d1-48b6-962f-958b827c1a07 00:12:51.978 07:28:01 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:51.978 7da4f284-71cb-465e-809b-8e30d78fb026 00:12:51.978 07:28:01 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:52.239 /dev/nbd0 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:52.239 mke2fs 1.47.0 (5-Feb-2023) 00:12:52.239 Discarding device blocks: 0/4096 done 00:12:52.239 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:52.239 00:12:52.239 Allocating group tables: 0/1 done 00:12:52.239 Writing inode tables: 0/1 done 00:12:52.239 Creating journal (1024 blocks): done 00:12:52.239 Writing superblocks and filesystem accounting information: 0/1 done 00:12:52.239 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@51 -- # local i 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.239 07:28:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@41 -- # break 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:52.500 07:28:01 -- bdev/nbd_common.sh@147 -- # return 0 00:12:52.500 07:28:01 -- bdev/blockdev.sh@324 -- # killprocess 67944 00:12:52.500 07:28:01 -- common/autotest_common.sh@936 -- # '[' -z 67944 ']' 00:12:52.500 07:28:01 -- common/autotest_common.sh@940 -- # kill -0 67944 00:12:52.500 07:28:01 -- common/autotest_common.sh@941 -- # uname 00:12:52.500 07:28:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.500 07:28:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67944 00:12:52.500 killing process with pid 67944 00:12:52.500 07:28:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.500 07:28:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.500 07:28:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67944' 00:12:52.500 07:28:01 -- common/autotest_common.sh@955 -- # kill 67944 00:12:52.500 07:28:01 -- common/autotest_common.sh@960 -- # wait 67944 00:12:53.443 ************************************ 00:12:53.443 END TEST bdev_nbd 00:12:53.444 ************************************ 00:12:53.444 07:28:02 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:53.444 00:12:53.444 real 0m9.678s 00:12:53.444 user 0m13.084s 00:12:53.444 sys 0m3.202s 00:12:53.444 07:28:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:53.444 07:28:02 -- common/autotest_common.sh@10 -- # set +x 00:12:53.444 07:28:02 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:53.444 07:28:02 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.444 07:28:02 -- common/autotest_common.sh@10 -- # set +x 00:12:53.444 ************************************ 00:12:53.444 START TEST bdev_fio 00:12:53.444 ************************************ 00:12:53.444 07:28:02 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@329 -- # local env_context 00:12:53.444 07:28:02 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:53.444 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:53.444 07:28:02 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:53.444 07:28:02 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:53.444 07:28:02 -- bdev/blockdev.sh@337 -- # echo '' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@337 -- # env_context= 00:12:53.444 07:28:02 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.444 07:28:02 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:53.444 07:28:02 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:53.444 
07:28:02 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:53.444 07:28:02 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:53.444 07:28:02 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:53.444 07:28:02 -- common/autotest_common.sh@1290 -- # cat 00:12:53.444 07:28:02 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1303 -- # cat 00:12:53.444 07:28:02 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:53.444 07:28:02 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:53.444 07:28:02 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:53.444 07:28:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:53.444 07:28:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:53.444 07:28:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:53.444 07:28:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:53.444 07:28:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:53.444 07:28:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:53.444 07:28:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:53.444 07:28:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:53.444 07:28:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:53.444 07:28:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:53.444 07:28:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:53.444 07:28:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:53.444 07:28:02 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:53.444 07:28:02 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.444 07:28:02 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.444 07:28:02 -- common/autotest_common.sh@10 -- # set +x 00:12:53.444 ************************************ 00:12:53.444 START TEST bdev_fio_rw_verify 00:12:53.444 ************************************ 00:12:53.444 07:28:02 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.444 07:28:02 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.444 07:28:02 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:53.444 07:28:02 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:53.444 07:28:02 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:53.444 07:28:02 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.444 07:28:02 -- common/autotest_common.sh@1330 -- # shift 00:12:53.444 07:28:02 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:53.444 07:28:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:53.444 07:28:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.444 07:28:02 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:53.444 07:28:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:53.444 07:28:02 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:53.444 07:28:02 -- common/autotest_common.sh@1336 -- # break 00:12:53.444 07:28:02 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:53.444 07:28:02 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:53.705 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.705 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.705 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.705 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.705 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.705 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:53.705 fio-3.35 00:12:53.705 Starting 6 threads 00:13:05.982 00:13:05.982 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68336: Tue Nov 19 07:28:13 2024 00:13:05.982 read: IOPS=23.3k, BW=90.9MiB/s (95.3MB/s)(909MiB/10002msec) 00:13:05.982 slat (usec): min=2, max=2553, avg= 5.02, stdev=15.49 00:13:05.982 clat (usec): min=75, max=194899, avg=819.26, stdev=1309.41 00:13:05.982 lat (usec): min=79, max=194909, avg=824.29, stdev=1309.89 00:13:05.982 clat percentiles (usec): 00:13:05.982 | 50.000th=[ 627], 99.000th=[ 3195], 99.900th=[ 5014], 00:13:05.982 | 
99.990th=[ 8160], 99.999th=[193987] 00:13:05.982 write: IOPS=23.5k, BW=91.9MiB/s (96.3MB/s)(919MiB/10002msec); 0 zone resets 00:13:05.982 slat (usec): min=12, max=4295, avg=29.77, stdev=102.90 00:13:05.982 clat (usec): min=66, max=8034, avg=976.92, stdev=745.55 00:13:05.982 lat (usec): min=80, max=10121, avg=1006.69, stdev=760.10 00:13:05.982 clat percentiles (usec): 00:13:05.982 | 50.000th=[ 742], 99.000th=[ 3720], 99.900th=[ 5342], 99.990th=[ 6521], 00:13:05.982 | 99.999th=[ 7767] 00:13:05.982 bw ( KiB/s): min=48155, max=148169, per=98.94%, avg=93069.58, stdev=6137.93, samples=114 00:13:05.982 iops : min=12036, max=37042, avg=23266.68, stdev=1534.55, samples=114 00:13:05.982 lat (usec) : 100=0.06%, 250=6.43%, 500=23.78%, 750=26.14%, 1000=15.96% 00:13:05.982 lat (msec) : 2=19.68%, 4=7.46%, 10=0.49%, 20=0.01%, 250=0.01% 00:13:05.982 cpu : usr=45.67%, sys=31.61%, ctx=7651, majf=0, minf=22471 00:13:05.982 IO depths : 1=11.7%, 2=24.2%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.982 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.982 issued rwts: total=232784,235218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.982 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:05.982 00:13:05.982 Run status group 0 (all jobs): 00:13:05.982 READ: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s (95.3MB/s-95.3MB/s), io=909MiB (953MB), run=10002-10002msec 00:13:05.982 WRITE: bw=91.9MiB/s (96.3MB/s), 91.9MiB/s-91.9MiB/s (96.3MB/s-96.3MB/s), io=919MiB (963MB), run=10002-10002msec 00:13:05.982 ----------------------------------------------------- 00:13:05.982 Suppressions used: 00:13:05.982 count bytes template 00:13:05.982 6 48 /usr/src/fio/parse.c 00:13:05.982 2279 218784 /usr/src/fio/iolog.c 00:13:05.982 1 8 libtcmalloc_minimal.so 00:13:05.982 1 904 libcrypto.so 00:13:05.982 ----------------------------------------------------- 00:13:05.982 00:13:05.982 00:13:05.982 real 0m11.852s 00:13:05.982 user 0m28.914s 00:13:05.982 sys 0m19.288s 00:13:05.982 07:28:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:05.982 ************************************ 00:13:05.982 END TEST bdev_fio_rw_verify 00:13:05.982 ************************************ 00:13:05.982 07:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.982 07:28:14 -- bdev/blockdev.sh@348 -- # rm -f 00:13:05.982 07:28:14 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.982 07:28:14 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:05.982 07:28:14 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.982 07:28:14 -- common/autotest_common.sh@1270 -- # local workload=trim 00:13:05.982 07:28:14 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:13:05.982 07:28:14 -- common/autotest_common.sh@1272 -- # local env_context= 00:13:05.982 07:28:14 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:13:05.982 07:28:14 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:05.982 07:28:14 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:13:05.982 07:28:14 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:13:05.982 07:28:14 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.982 07:28:14 -- common/autotest_common.sh@1290 
-- # cat 00:13:05.982 07:28:14 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:13:05.982 07:28:14 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:13:05.982 07:28:14 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:13:05.982 07:28:14 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:05.983 07:28:14 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3935344e-b0ac-48e3-9734-9e298966ef7d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3935344e-b0ac-48e3-9734-9e298966ef7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b8f9ef06-c449-4e80-b376-f44fbc5d49c1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8f9ef06-c449-4e80-b376-f44fbc5d49c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "abea8d90-468b-40bb-87a1-8e5b0e983bb0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "abea8d90-468b-40bb-87a1-8e5b0e983bb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "f18f218f-88da-4f7c-8bdc-c26f5cb8fd31"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f18f218f-88da-4f7c-8bdc-c26f5cb8fd31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "15fa82ef-f7ee-4b7f-9431-9a36b17ead0b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "15fa82ef-f7ee-4b7f-9431-9a36b17ead0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 
0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ea9fe6b3-9b88-4113-969f-35a69391fe19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ea9fe6b3-9b88-4113-969f-35a69391fe19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:05.983 07:28:14 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:13:05.983 07:28:14 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.983 /home/vagrant/spdk_repo/spdk 00:13:05.983 07:28:14 -- bdev/blockdev.sh@360 -- # popd 00:13:05.983 07:28:14 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:13:05.983 07:28:14 -- bdev/blockdev.sh@362 -- # return 0 00:13:05.983 00:13:05.983 real 0m12.006s 00:13:05.983 user 0m28.986s 00:13:05.983 sys 0m19.354s 00:13:05.983 07:28:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:05.983 07:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.983 ************************************ 00:13:05.983 END TEST bdev_fio 00:13:05.983 ************************************ 00:13:05.983 07:28:14 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:05.983 07:28:14 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:05.983 07:28:14 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:05.983 07:28:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.983 07:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.983 ************************************ 00:13:05.983 START TEST bdev_verify 00:13:05.983 ************************************ 00:13:05.983 07:28:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:05.983 [2024-11-19 07:28:14.672240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:05.983 [2024-11-19 07:28:14.672389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68505 ] 00:13:05.983 [2024-11-19 07:28:14.826574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:05.983 [2024-11-19 07:28:15.003769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.983 [2024-11-19 07:28:15.003840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.241 Running I/O for 5 seconds... 
00:13:11.512 00:13:11.512 Latency(us) 00:13:11.512 [2024-11-19T07:28:20.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x0 length 0x20000 00:13:11.512 nvme0n1 : 5.07 2400.40 9.38 0.00 0.00 53073.43 16434.41 67754.14 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x20000 length 0x20000 00:13:11.512 nvme0n1 : 5.07 2460.07 9.61 0.00 0.00 51814.88 12754.31 78239.90 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x0 length 0x80000 00:13:11.512 nvme1n1 : 5.07 2315.67 9.05 0.00 0.00 55000.24 6452.78 77836.60 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x80000 length 0x80000 00:13:11.512 nvme1n1 : 5.05 2360.86 9.22 0.00 0.00 54018.03 3012.14 81466.29 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x0 length 0x80000 00:13:11.512 nvme1n2 : 5.07 2368.77 9.25 0.00 0.00 53645.11 4864.79 73400.32 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x80000 length 0x80000 00:13:11.512 nvme1n2 : 5.07 2247.31 8.78 0.00 0.00 56572.11 15526.99 75820.11 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x0 length 0x80000 00:13:11.512 nvme1n3 : 5.08 2284.51 8.92 0.00 0.00 55601.24 5116.85 76626.71 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x80000 length 0x80000 00:13:11.512 nvme1n3 : 5.06 2398.28 9.37 0.00 0.00 53051.56 12451.84 74610.22 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x0 length 0xbd0bd 00:13:11.512 nvme2n1 : 5.08 2255.46 8.81 0.00 0.00 56267.75 10384.94 77030.01 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:11.512 nvme2n1 : 5.06 2075.20 8.11 0.00 0.00 61257.39 5192.47 93565.24 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0x0 length 0xa0000 00:13:11.512 nvme3n1 : 5.08 2387.48 9.33 0.00 0.00 53050.56 5494.94 78239.90 00:13:11.512 [2024-11-19T07:28:20.762Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:11.512 Verification LBA range: start 0xa0000 length 0xa0000 00:13:11.512 nvme3n1 : 5.08 2373.65 9.27 0.00 0.00 53551.47 3062.55 75416.81 00:13:11.512 [2024-11-19T07:28:20.762Z] =================================================================================================================== 00:13:11.512 [2024-11-19T07:28:20.762Z] Total : 27927.66 109.09 0.00 0.00 54641.47 3012.14 93565.24 00:13:12.111 00:13:12.111 real 0m6.681s 00:13:12.111 user 0m8.726s 00:13:12.111 sys 
0m2.914s 00:13:12.111 07:28:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:12.112 07:28:21 -- common/autotest_common.sh@10 -- # set +x 00:13:12.112 ************************************ 00:13:12.112 END TEST bdev_verify 00:13:12.112 ************************************ 00:13:12.112 07:28:21 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:12.112 07:28:21 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:12.112 07:28:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.112 07:28:21 -- common/autotest_common.sh@10 -- # set +x 00:13:12.112 ************************************ 00:13:12.112 START TEST bdev_verify_big_io 00:13:12.112 ************************************ 00:13:12.112 07:28:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:12.378 [2024-11-19 07:28:21.404927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:12.378 [2024-11-19 07:28:21.405058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68605 ] 00:13:12.378 [2024-11-19 07:28:21.548863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:12.635 [2024-11-19 07:28:21.727060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.635 [2024-11-19 07:28:21.727120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.202 Running I/O for 5 seconds... 
00:13:19.758 00:13:19.758 Latency(us) 00:13:19.758 [2024-11-19T07:28:29.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x0 length 0x2000 00:13:19.758 nvme0n1 : 5.59 217.21 13.58 0.00 0.00 569403.28 68560.74 683994.19 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x2000 length 0x2000 00:13:19.758 nvme0n1 : 5.57 249.53 15.60 0.00 0.00 507233.96 16131.94 671088.64 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x0 length 0x8000 00:13:19.758 nvme1n1 : 5.59 202.45 12.65 0.00 0.00 605252.30 44161.18 683994.19 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x8000 length 0x8000 00:13:19.758 nvme1n1 : 5.56 189.39 11.84 0.00 0.00 655884.52 50009.01 651730.31 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x0 length 0x8000 00:13:19.758 nvme1n2 : 5.59 201.84 12.61 0.00 0.00 595542.60 44766.13 871124.68 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x8000 length 0x8000 00:13:19.758 nvme1n2 : 5.56 204.76 12.80 0.00 0.00 596551.19 51017.26 638824.76 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x0 length 0x8000 00:13:19.758 nvme1n3 : 5.62 216.10 13.51 0.00 0.00 542158.50 19459.15 564617.85 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x8000 length 0x8000 00:13:19.758 nvme1n3 : 5.56 202.89 12.68 0.00 0.00 592046.34 49202.41 658183.09 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x0 length 0xbd0b 00:13:19.758 nvme2n1 : 5.62 325.04 20.32 0.00 0.00 354226.12 15325.34 401685.27 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:19.758 nvme2n1 : 5.57 249.64 15.60 0.00 0.00 472217.63 50009.01 683994.19 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0x0 length 0xa000 00:13:19.758 nvme3n1 : 5.69 307.41 19.21 0.00 0.00 363234.05 1247.70 529127.58 00:13:19.758 [2024-11-19T07:28:29.008Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:19.758 Verification LBA range: start 0xa000 length 0xa000 00:13:19.758 nvme3n1 : 5.58 232.55 14.53 0.00 0.00 494351.44 4310.25 613013.66 00:13:19.758 [2024-11-19T07:28:29.008Z] =================================================================================================================== 00:13:19.758 [2024-11-19T07:28:29.008Z] Total : 2798.81 174.93 0.00 0.00 512715.97 1247.70 871124.68 00:13:19.758 00:13:19.758 real 0m7.495s 00:13:19.758 user 
0m13.546s 00:13:19.758 sys 0m0.475s 00:13:19.758 07:28:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:19.758 ************************************ 00:13:19.758 END TEST bdev_verify_big_io 00:13:19.758 ************************************ 00:13:19.758 07:28:28 -- common/autotest_common.sh@10 -- # set +x 00:13:19.758 07:28:28 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:19.758 07:28:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:19.758 07:28:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.758 07:28:28 -- common/autotest_common.sh@10 -- # set +x 00:13:19.758 ************************************ 00:13:19.758 START TEST bdev_write_zeroes 00:13:19.758 ************************************ 00:13:19.758 07:28:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:19.758 [2024-11-19 07:28:28.963615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:19.758 [2024-11-19 07:28:28.963729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68709 ] 00:13:20.016 [2024-11-19 07:28:29.108689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.274 [2024-11-19 07:28:29.288242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.532 Running I/O for 1 seconds... 00:13:21.466 00:13:21.466 Latency(us) 00:13:21.466 [2024-11-19T07:28:30.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.466 [2024-11-19T07:28:30.716Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.466 nvme0n1 : 1.00 11465.61 44.79 0.00 0.00 11154.00 6452.78 18854.20 00:13:21.466 [2024-11-19T07:28:30.716Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.466 nvme1n1 : 1.01 11451.57 44.73 0.00 0.00 11159.34 5721.80 21173.17 00:13:21.466 [2024-11-19T07:28:30.716Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.466 nvme1n2 : 1.01 11405.52 44.55 0.00 0.00 11196.65 6704.84 21878.94 00:13:21.466 [2024-11-19T07:28:30.716Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.466 nvme1n3 : 1.01 11392.38 44.50 0.00 0.00 11201.96 6503.19 21878.94 00:13:21.466 [2024-11-19T07:28:30.716Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.466 nvme2n1 : 1.01 15449.68 60.35 0.00 0.00 8250.60 3982.57 18854.20 00:13:21.466 [2024-11-19T07:28:30.716Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.466 nvme3n1 : 1.02 11721.77 45.79 0.00 0.00 10795.09 2659.25 20164.92 00:13:21.466 [2024-11-19T07:28:30.716Z] =================================================================================================================== 00:13:21.466 [2024-11-19T07:28:30.716Z] Total : 72886.53 284.71 0.00 0.00 10495.92 2659.25 21878.94 00:13:22.400 00:13:22.400 real 0m2.729s 00:13:22.400 user 0m2.068s 00:13:22.400 sys 0m0.486s 00:13:22.400 07:28:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:13:22.400 07:28:31 -- common/autotest_common.sh@10 -- # set +x 00:13:22.400 ************************************ 00:13:22.400 END TEST bdev_write_zeroes 00:13:22.400 ************************************ 00:13:22.658 07:28:31 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.658 07:28:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:22.658 07:28:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.658 07:28:31 -- common/autotest_common.sh@10 -- # set +x 00:13:22.658 ************************************ 00:13:22.658 START TEST bdev_json_nonenclosed 00:13:22.658 ************************************ 00:13:22.658 07:28:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.658 [2024-11-19 07:28:31.755438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:22.658 [2024-11-19 07:28:31.755558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68762 ] 00:13:22.658 [2024-11-19 07:28:31.905214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.918 [2024-11-19 07:28:32.088037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.918 [2024-11-19 07:28:32.088205] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:22.918 [2024-11-19 07:28:32.088223] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:23.179 00:13:23.179 real 0m0.683s 00:13:23.179 user 0m0.488s 00:13:23.179 sys 0m0.090s 00:13:23.179 07:28:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:23.179 ************************************ 00:13:23.179 END TEST bdev_json_nonenclosed 00:13:23.179 ************************************ 00:13:23.179 07:28:32 -- common/autotest_common.sh@10 -- # set +x 00:13:23.179 07:28:32 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.179 07:28:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:23.179 07:28:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.179 07:28:32 -- common/autotest_common.sh@10 -- # set +x 00:13:23.439 ************************************ 00:13:23.439 START TEST bdev_json_nonarray 00:13:23.439 ************************************ 00:13:23.439 07:28:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.439 [2024-11-19 07:28:32.505968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
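The two JSON negative tests that bracket this point each feed bdevperf a deliberately broken config and pass only if spdk_app_stop exits non-zero: nonenclosed.json drops the outer braces (hence "not enclosed in {}"), and nonarray.json, starting below, makes "subsystems" something other than an array. For contrast, the minimal shape a config must have to load is roughly this (a sketch, not the actual test fixtures, whose contents this log never prints; /tmp/minimal.json is an illustrative path):

    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 }
            }
          ]
        }
      ]
    }
    EOF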
00:13:23.439 [2024-11-19 07:28:32.506105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68786 ] 00:13:23.439 [2024-11-19 07:28:32.660027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.699 [2024-11-19 07:28:32.897865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.699 [2024-11-19 07:28:32.898083] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:23.699 [2024-11-19 07:28:32.898113] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:24.271 00:13:24.271 real 0m0.773s 00:13:24.271 user 0m0.550s 00:13:24.271 sys 0m0.115s 00:13:24.271 07:28:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:24.271 ************************************ 00:13:24.271 END TEST bdev_json_nonarray 00:13:24.271 ************************************ 00:13:24.271 07:28:33 -- common/autotest_common.sh@10 -- # set +x 00:13:24.271 07:28:33 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:24.271 07:28:33 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:24.271 07:28:33 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:24.271 07:28:33 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:24.271 07:28:33 -- bdev/blockdev.sh@809 -- # cleanup 00:13:24.271 07:28:33 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:24.271 07:28:33 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:24.271 07:28:33 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:24.271 07:28:33 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:24.271 07:28:33 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:24.271 07:28:33 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:24.271 07:28:33 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:25.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:35.215 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.215 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.215 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.215 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.215 ************************************ 00:13:35.215 END TEST blockdev_xnvme 00:13:35.215 ************************************ 00:13:35.215 00:13:35.215 real 1m1.544s 00:13:35.215 user 1m22.440s 00:13:35.215 sys 0m44.210s 00:13:35.215 07:28:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:35.215 07:28:42 -- common/autotest_common.sh@10 -- # set +x 00:13:35.215 07:28:43 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:35.215 07:28:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:35.215 07:28:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.215 07:28:43 -- common/autotest_common.sh@10 -- # set +x 00:13:35.215 ************************************ 00:13:35.215 START TEST ublk 00:13:35.215 ************************************ 00:13:35.215 07:28:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:35.215 * Looking for test storage... 
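The xtrace block just below is scripts/common.sh gating the lcov coverage options: the installed lcov version (the last field of lcov --version) is compared field by field against 2, with both strings split on '.', '-' and ':'. Condensed, the comparison the trace walks through looks like this (a sketch equivalent of the script's lt/cmp_versions helpers, not their verbatim source; version_lt is an illustrative name):

    version_lt() {  # return 0 when $1 < $2, comparing numeric fields left to right
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1  # equal counts as not-less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: legacy --rc options"

Here 1.15 loses to 2 on the very first field, so the branch settles immediately and the 1.x flavor of LCOV_OPTS (the --rc lcov_branch_coverage style seen in the trace) gets exported.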
00:13:35.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:35.215 07:28:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:35.215 07:28:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:35.215 07:28:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:35.215 07:28:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:35.215 07:28:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:35.215 07:28:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:35.215 07:28:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:35.215 07:28:43 -- scripts/common.sh@335 -- # IFS=.-: 00:13:35.215 07:28:43 -- scripts/common.sh@335 -- # read -ra ver1 00:13:35.215 07:28:43 -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.215 07:28:43 -- scripts/common.sh@336 -- # read -ra ver2 00:13:35.215 07:28:43 -- scripts/common.sh@337 -- # local 'op=<' 00:13:35.215 07:28:43 -- scripts/common.sh@339 -- # ver1_l=2 00:13:35.215 07:28:43 -- scripts/common.sh@340 -- # ver2_l=1 00:13:35.215 07:28:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:35.215 07:28:43 -- scripts/common.sh@343 -- # case "$op" in 00:13:35.215 07:28:43 -- scripts/common.sh@344 -- # : 1 00:13:35.215 07:28:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:35.215 07:28:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.215 07:28:43 -- scripts/common.sh@364 -- # decimal 1 00:13:35.215 07:28:43 -- scripts/common.sh@352 -- # local d=1 00:13:35.215 07:28:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.215 07:28:43 -- scripts/common.sh@354 -- # echo 1 00:13:35.215 07:28:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:35.215 07:28:43 -- scripts/common.sh@365 -- # decimal 2 00:13:35.215 07:28:43 -- scripts/common.sh@352 -- # local d=2 00:13:35.215 07:28:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.215 07:28:43 -- scripts/common.sh@354 -- # echo 2 00:13:35.215 07:28:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:35.215 07:28:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:35.215 07:28:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:35.215 07:28:43 -- scripts/common.sh@367 -- # return 0 00:13:35.215 07:28:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.215 07:28:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:35.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.215 --rc genhtml_branch_coverage=1 00:13:35.215 --rc genhtml_function_coverage=1 00:13:35.215 --rc genhtml_legend=1 00:13:35.215 --rc geninfo_all_blocks=1 00:13:35.215 --rc geninfo_unexecuted_blocks=1 00:13:35.215 00:13:35.215 ' 00:13:35.215 07:28:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:35.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.215 --rc genhtml_branch_coverage=1 00:13:35.215 --rc genhtml_function_coverage=1 00:13:35.215 --rc genhtml_legend=1 00:13:35.215 --rc geninfo_all_blocks=1 00:13:35.215 --rc geninfo_unexecuted_blocks=1 00:13:35.215 00:13:35.215 ' 00:13:35.215 07:28:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:35.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.215 --rc genhtml_branch_coverage=1 00:13:35.215 --rc genhtml_function_coverage=1 00:13:35.215 --rc genhtml_legend=1 00:13:35.215 --rc geninfo_all_blocks=1 00:13:35.215 --rc geninfo_unexecuted_blocks=1 00:13:35.215 00:13:35.215 ' 00:13:35.215 07:28:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:35.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.215 --rc genhtml_branch_coverage=1 00:13:35.215 --rc genhtml_function_coverage=1 00:13:35.215 --rc genhtml_legend=1 00:13:35.215 --rc geninfo_all_blocks=1 00:13:35.215 --rc geninfo_unexecuted_blocks=1 00:13:35.215 00:13:35.215 ' 00:13:35.215 07:28:43 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:35.216 07:28:43 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:35.216 07:28:43 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:35.216 07:28:43 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:35.216 07:28:43 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:35.216 07:28:43 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:35.216 07:28:43 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:35.216 07:28:43 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:35.216 07:28:43 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:35.216 07:28:43 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:35.216 07:28:43 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:35.216 07:28:43 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:35.216 07:28:43 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:35.216 07:28:43 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:35.216 07:28:43 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:35.216 07:28:43 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:35.216 07:28:43 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:35.216 07:28:43 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:35.216 07:28:43 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:35.216 07:28:43 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:35.216 07:28:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:35.216 07:28:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.216 07:28:43 -- common/autotest_common.sh@10 -- # set +x 00:13:35.216 ************************************ 00:13:35.216 START TEST test_save_ublk_config 00:13:35.216 ************************************ 00:13:35.216 07:28:43 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:35.216 07:28:43 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:35.216 07:28:43 -- ublk/ublk.sh@103 -- # tgtpid=69142 00:13:35.216 07:28:43 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:35.216 07:28:43 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:35.216 07:28:43 -- ublk/ublk.sh@106 -- # waitforlisten 69142 00:13:35.216 07:28:43 -- common/autotest_common.sh@829 -- # '[' -z 69142 ']' 00:13:35.216 07:28:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.216 07:28:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.216 07:28:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.216 07:28:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.216 07:28:43 -- common/autotest_common.sh@10 -- # set +x 00:13:35.216 [2024-11-19 07:28:43.245050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
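test_save_ublk_config, starting here, is a round-trip: bring up a target with a single ublk disk, dump the live configuration over RPC, kill the target, then prove a fresh one can be rebuilt from that dump alone. Stripped of the harness wrappers, the first half reduces to something like this (a sketch driving rpc.py directly, where the log uses rpc_cmd; the 32 MiB malloc size matches the 8192 x 4096-byte blocks in the dump, and ublk-config.json is an illustrative filename):

    ./build/bin/spdk_tgt -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    ./scripts/rpc.py save_config > ublk-config.json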
00:13:35.216 [2024-11-19 07:28:43.245164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69142 ] 00:13:35.216 [2024-11-19 07:28:43.393035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.216 [2024-11-19 07:28:43.588602] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.216 [2024-11-19 07:28:43.588808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.475 07:28:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.475 07:28:44 -- common/autotest_common.sh@862 -- # return 0 00:13:35.475 07:28:44 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:35.475 07:28:44 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:35.475 07:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.475 07:28:44 -- common/autotest_common.sh@10 -- # set +x 00:13:35.475 [2024-11-19 07:28:44.723070] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:35.735 malloc0 00:13:35.735 [2024-11-19 07:28:44.794338] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:35.735 [2024-11-19 07:28:44.794447] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:35.735 [2024-11-19 07:28:44.794456] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:35.735 [2024-11-19 07:28:44.794486] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:35.735 [2024-11-19 07:28:44.803680] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:35.735 [2024-11-19 07:28:44.803720] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:35.735 [2024-11-19 07:28:44.810229] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:35.735 [2024-11-19 07:28:44.810366] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:35.735 [2024-11-19 07:28:44.827207] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:35.735 0 00:13:35.735 07:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.735 07:28:44 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:35.735 07:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.735 07:28:44 -- common/autotest_common.sh@10 -- # set +x 00:13:35.995 07:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.996 07:28:45 -- ublk/ublk.sh@115 -- # config='{ 00:13:35.996 "subsystems": [ 00:13:35.996 { 00:13:35.996 "subsystem": "iobuf", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "iobuf_set_options", 00:13:35.996 "params": { 00:13:35.996 "small_pool_count": 8192, 00:13:35.996 "large_pool_count": 1024, 00:13:35.996 "small_bufsize": 8192, 00:13:35.996 "large_bufsize": 135168 00:13:35.996 } 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "sock", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "sock_impl_set_options", 00:13:35.996 "params": { 00:13:35.996 "impl_name": "posix", 00:13:35.996 "recv_buf_size": 2097152, 00:13:35.996 "send_buf_size": 2097152, 00:13:35.996 "enable_recv_pipe": true, 00:13:35.996 "enable_quickack": false, 00:13:35.996 "enable_placement_id": 0, 00:13:35.996 
"enable_zerocopy_send_server": true, 00:13:35.996 "enable_zerocopy_send_client": false, 00:13:35.996 "zerocopy_threshold": 0, 00:13:35.996 "tls_version": 0, 00:13:35.996 "enable_ktls": false 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "sock_impl_set_options", 00:13:35.996 "params": { 00:13:35.996 "impl_name": "ssl", 00:13:35.996 "recv_buf_size": 4096, 00:13:35.996 "send_buf_size": 4096, 00:13:35.996 "enable_recv_pipe": true, 00:13:35.996 "enable_quickack": false, 00:13:35.996 "enable_placement_id": 0, 00:13:35.996 "enable_zerocopy_send_server": true, 00:13:35.996 "enable_zerocopy_send_client": false, 00:13:35.996 "zerocopy_threshold": 0, 00:13:35.996 "tls_version": 0, 00:13:35.996 "enable_ktls": false 00:13:35.996 } 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "vmd", 00:13:35.996 "config": [] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "accel", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "accel_set_options", 00:13:35.996 "params": { 00:13:35.996 "small_cache_size": 128, 00:13:35.996 "large_cache_size": 16, 00:13:35.996 "task_count": 2048, 00:13:35.996 "sequence_count": 2048, 00:13:35.996 "buf_count": 2048 00:13:35.996 } 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "bdev", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "bdev_set_options", 00:13:35.996 "params": { 00:13:35.996 "bdev_io_pool_size": 65535, 00:13:35.996 "bdev_io_cache_size": 256, 00:13:35.996 "bdev_auto_examine": true, 00:13:35.996 "iobuf_small_cache_size": 128, 00:13:35.996 "iobuf_large_cache_size": 16 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "bdev_raid_set_options", 00:13:35.996 "params": { 00:13:35.996 "process_window_size_kb": 1024 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "bdev_iscsi_set_options", 00:13:35.996 "params": { 00:13:35.996 "timeout_sec": 30 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "bdev_nvme_set_options", 00:13:35.996 "params": { 00:13:35.996 "action_on_timeout": "none", 00:13:35.996 "timeout_us": 0, 00:13:35.996 "timeout_admin_us": 0, 00:13:35.996 "keep_alive_timeout_ms": 10000, 00:13:35.996 "transport_retry_count": 4, 00:13:35.996 "arbitration_burst": 0, 00:13:35.996 "low_priority_weight": 0, 00:13:35.996 "medium_priority_weight": 0, 00:13:35.996 "high_priority_weight": 0, 00:13:35.996 "nvme_adminq_poll_period_us": 10000, 00:13:35.996 "nvme_ioq_poll_period_us": 0, 00:13:35.996 "io_queue_requests": 0, 00:13:35.996 "delay_cmd_submit": true, 00:13:35.996 "bdev_retry_count": 3, 00:13:35.996 "transport_ack_timeout": 0, 00:13:35.996 "ctrlr_loss_timeout_sec": 0, 00:13:35.996 "reconnect_delay_sec": 0, 00:13:35.996 "fast_io_fail_timeout_sec": 0, 00:13:35.996 "generate_uuids": false, 00:13:35.996 "transport_tos": 0, 00:13:35.996 "io_path_stat": false, 00:13:35.996 "allow_accel_sequence": false 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "bdev_nvme_set_hotplug", 00:13:35.996 "params": { 00:13:35.996 "period_us": 100000, 00:13:35.996 "enable": false 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "bdev_malloc_create", 00:13:35.996 "params": { 00:13:35.996 "name": "malloc0", 00:13:35.996 "num_blocks": 8192, 00:13:35.996 "block_size": 4096, 00:13:35.996 "physical_block_size": 4096, 00:13:35.996 "uuid": "e7e95f89-939b-446d-a763-a669147412fa", 00:13:35.996 "optimal_io_boundary": 0 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 
"method": "bdev_wait_for_examine" 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "scsi", 00:13:35.996 "config": null 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "scheduler", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "framework_set_scheduler", 00:13:35.996 "params": { 00:13:35.996 "name": "static" 00:13:35.996 } 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "vhost_scsi", 00:13:35.996 "config": [] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "vhost_blk", 00:13:35.996 "config": [] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "ublk", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "ublk_create_target", 00:13:35.996 "params": { 00:13:35.996 "cpumask": "1" 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "ublk_start_disk", 00:13:35.996 "params": { 00:13:35.996 "bdev_name": "malloc0", 00:13:35.996 "ublk_id": 0, 00:13:35.996 "num_queues": 1, 00:13:35.996 "queue_depth": 128 00:13:35.996 } 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "nbd", 00:13:35.996 "config": [] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "nvmf", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "nvmf_set_config", 00:13:35.996 "params": { 00:13:35.996 "discovery_filter": "match_any", 00:13:35.996 "admin_cmd_passthru": { 00:13:35.996 "identify_ctrlr": false 00:13:35.996 } 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "nvmf_set_max_subsystems", 00:13:35.996 "params": { 00:13:35.996 "max_subsystems": 1024 00:13:35.996 } 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "method": "nvmf_set_crdt", 00:13:35.996 "params": { 00:13:35.996 "crdt1": 0, 00:13:35.996 "crdt2": 0, 00:13:35.996 "crdt3": 0 00:13:35.996 } 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }, 00:13:35.996 { 00:13:35.996 "subsystem": "iscsi", 00:13:35.996 "config": [ 00:13:35.996 { 00:13:35.996 "method": "iscsi_set_options", 00:13:35.996 "params": { 00:13:35.996 "node_base": "iqn.2016-06.io.spdk", 00:13:35.996 "max_sessions": 128, 00:13:35.996 "max_connections_per_session": 2, 00:13:35.996 "max_queue_depth": 64, 00:13:35.996 "default_time2wait": 2, 00:13:35.996 "default_time2retain": 20, 00:13:35.996 "first_burst_length": 8192, 00:13:35.996 "immediate_data": true, 00:13:35.996 "allow_duplicated_isid": false, 00:13:35.996 "error_recovery_level": 0, 00:13:35.996 "nop_timeout": 60, 00:13:35.996 "nop_in_interval": 30, 00:13:35.996 "disable_chap": false, 00:13:35.996 "require_chap": false, 00:13:35.996 "mutual_chap": false, 00:13:35.996 "chap_group": 0, 00:13:35.996 "max_large_datain_per_connection": 64, 00:13:35.996 "max_r2t_per_connection": 4, 00:13:35.996 "pdu_pool_size": 36864, 00:13:35.996 "immediate_data_pool_size": 16384, 00:13:35.996 "data_out_pool_size": 2048 00:13:35.996 } 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 } 00:13:35.996 ] 00:13:35.996 }' 00:13:35.996 07:28:45 -- ublk/ublk.sh@116 -- # killprocess 69142 00:13:35.996 07:28:45 -- common/autotest_common.sh@936 -- # '[' -z 69142 ']' 00:13:35.996 07:28:45 -- common/autotest_common.sh@940 -- # kill -0 69142 00:13:35.996 07:28:45 -- common/autotest_common.sh@941 -- # uname 00:13:35.996 07:28:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.996 07:28:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69142 00:13:35.996 07:28:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:35.996 killing process with pid 
69142 00:13:35.996 07:28:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:35.996 07:28:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69142' 00:13:35.996 07:28:45 -- common/autotest_common.sh@955 -- # kill 69142 00:13:35.996 07:28:45 -- common/autotest_common.sh@960 -- # wait 69142 00:13:37.379 [2024-11-19 07:28:46.539124] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:37.379 [2024-11-19 07:28:46.579276] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:37.379 [2024-11-19 07:28:46.579403] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:37.379 [2024-11-19 07:28:46.586205] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:37.379 [2024-11-19 07:28:46.586259] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:37.379 [2024-11-19 07:28:46.586277] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:37.379 [2024-11-19 07:28:46.586302] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:37.379 [2024-11-19 07:28:46.586436] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:38.758 07:28:47 -- ublk/ublk.sh@119 -- # tgtpid=69216 00:13:38.758 07:28:47 -- ublk/ublk.sh@121 -- # waitforlisten 69216 00:13:38.758 07:28:47 -- common/autotest_common.sh@829 -- # '[' -z 69216 ']' 00:13:38.758 07:28:47 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:38.758 07:28:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.758 07:28:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.758 07:28:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
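The -c /dev/fd/63 argument in the relaunch above is bash process substitution: the JSON captured from save_config never touches disk, it is echoed straight into a file descriptor that the new spdk_tgt reads as its startup config. The idiom is simply (a sketch; variable names are illustrative):

    config=$(./scripts/rpc.py save_config)
    kill "$tgtpid"; wait "$tgtpid"
    ./build/bin/spdk_tgt -L ublk -c <(echo "$config") &

The echoed blob that follows is that captured config, the same subsystems document the first target dumped, down to the malloc0 uuid.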
00:13:38.758 07:28:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.758 07:28:47 -- common/autotest_common.sh@10 -- # set +x 00:13:38.758 07:28:47 -- ublk/ublk.sh@118 -- # echo '{ 00:13:38.758 "subsystems": [ 00:13:38.758 { 00:13:38.758 "subsystem": "iobuf", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "iobuf_set_options", 00:13:38.758 "params": { 00:13:38.758 "small_pool_count": 8192, 00:13:38.758 "large_pool_count": 1024, 00:13:38.758 "small_bufsize": 8192, 00:13:38.758 "large_bufsize": 135168 00:13:38.758 } 00:13:38.758 } 00:13:38.758 ] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "sock", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "sock_impl_set_options", 00:13:38.758 "params": { 00:13:38.758 "impl_name": "posix", 00:13:38.758 "recv_buf_size": 2097152, 00:13:38.758 "send_buf_size": 2097152, 00:13:38.758 "enable_recv_pipe": true, 00:13:38.758 "enable_quickack": false, 00:13:38.758 "enable_placement_id": 0, 00:13:38.758 "enable_zerocopy_send_server": true, 00:13:38.758 "enable_zerocopy_send_client": false, 00:13:38.758 "zerocopy_threshold": 0, 00:13:38.758 "tls_version": 0, 00:13:38.758 "enable_ktls": false 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "sock_impl_set_options", 00:13:38.758 "params": { 00:13:38.758 "impl_name": "ssl", 00:13:38.758 "recv_buf_size": 4096, 00:13:38.758 "send_buf_size": 4096, 00:13:38.758 "enable_recv_pipe": true, 00:13:38.758 "enable_quickack": false, 00:13:38.758 "enable_placement_id": 0, 00:13:38.758 "enable_zerocopy_send_server": true, 00:13:38.758 "enable_zerocopy_send_client": false, 00:13:38.758 "zerocopy_threshold": 0, 00:13:38.758 "tls_version": 0, 00:13:38.758 "enable_ktls": false 00:13:38.758 } 00:13:38.758 } 00:13:38.758 ] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "vmd", 00:13:38.758 "config": [] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "accel", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "accel_set_options", 00:13:38.758 "params": { 00:13:38.758 "small_cache_size": 128, 00:13:38.758 "large_cache_size": 16, 00:13:38.758 "task_count": 2048, 00:13:38.758 "sequence_count": 2048, 00:13:38.758 "buf_count": 2048 00:13:38.758 } 00:13:38.758 } 00:13:38.758 ] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "bdev", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "bdev_set_options", 00:13:38.758 "params": { 00:13:38.758 "bdev_io_pool_size": 65535, 00:13:38.758 "bdev_io_cache_size": 256, 00:13:38.758 "bdev_auto_examine": true, 00:13:38.758 "iobuf_small_cache_size": 128, 00:13:38.758 "iobuf_large_cache_size": 16 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "bdev_raid_set_options", 00:13:38.758 "params": { 00:13:38.758 "process_window_size_kb": 1024 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "bdev_iscsi_set_options", 00:13:38.758 "params": { 00:13:38.758 "timeout_sec": 30 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "bdev_nvme_set_options", 00:13:38.758 "params": { 00:13:38.758 "action_on_timeout": "none", 00:13:38.758 "timeout_us": 0, 00:13:38.758 "timeout_admin_us": 0, 00:13:38.758 "keep_alive_timeout_ms": 10000, 00:13:38.758 "transport_retry_count": 4, 00:13:38.758 "arbitration_burst": 0, 00:13:38.758 "low_priority_weight": 0, 00:13:38.758 "medium_priority_weight": 0, 00:13:38.758 "high_priority_weight": 0, 00:13:38.758 "nvme_adminq_poll_period_us": 10000, 00:13:38.758 "nvme_ioq_poll_period_us": 0, 00:13:38.758 
"io_queue_requests": 0, 00:13:38.758 "delay_cmd_submit": true, 00:13:38.758 "bdev_retry_count": 3, 00:13:38.758 "transport_ack_timeout": 0, 00:13:38.758 "ctrlr_loss_timeout_sec": 0, 00:13:38.758 "reconnect_delay_sec": 0, 00:13:38.758 "fast_io_fail_timeout_sec": 0, 00:13:38.758 "generate_uuids": false, 00:13:38.758 "transport_tos": 0, 00:13:38.758 "io_path_stat": false, 00:13:38.758 "allow_accel_sequence": false 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "bdev_nvme_set_hotplug", 00:13:38.758 "params": { 00:13:38.758 "period_us": 100000, 00:13:38.758 "enable": false 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "bdev_malloc_create", 00:13:38.758 "params": { 00:13:38.758 "name": "malloc0", 00:13:38.758 "num_blocks": 8192, 00:13:38.758 "block_size": 4096, 00:13:38.758 "physical_block_size": 4096, 00:13:38.758 "uuid": "e7e95f89-939b-446d-a763-a669147412fa", 00:13:38.758 "optimal_io_boundary": 0 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "bdev_wait_for_examine" 00:13:38.758 } 00:13:38.758 ] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "scsi", 00:13:38.758 "config": null 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "scheduler", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "framework_set_scheduler", 00:13:38.758 "params": { 00:13:38.758 "name": "static" 00:13:38.758 } 00:13:38.758 } 00:13:38.758 ] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "vhost_scsi", 00:13:38.758 "config": [] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "vhost_blk", 00:13:38.758 "config": [] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "ublk", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "ublk_create_target", 00:13:38.758 "params": { 00:13:38.758 "cpumask": "1" 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "ublk_start_disk", 00:13:38.758 "params": { 00:13:38.758 "bdev_name": "malloc0", 00:13:38.758 "ublk_id": 0, 00:13:38.758 "num_queues": 1, 00:13:38.758 "queue_depth": 128 00:13:38.758 } 00:13:38.758 } 00:13:38.758 ] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "nbd", 00:13:38.758 "config": [] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "nvmf", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "nvmf_set_config", 00:13:38.758 "params": { 00:13:38.758 "discovery_filter": "match_any", 00:13:38.758 "admin_cmd_passthru": { 00:13:38.758 "identify_ctrlr": false 00:13:38.758 } 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "nvmf_set_max_subsystems", 00:13:38.758 "params": { 00:13:38.758 "max_subsystems": 1024 00:13:38.758 } 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "method": "nvmf_set_crdt", 00:13:38.758 "params": { 00:13:38.758 "crdt1": 0, 00:13:38.758 "crdt2": 0, 00:13:38.758 "crdt3": 0 00:13:38.758 } 00:13:38.758 } 00:13:38.758 ] 00:13:38.758 }, 00:13:38.758 { 00:13:38.758 "subsystem": "iscsi", 00:13:38.758 "config": [ 00:13:38.758 { 00:13:38.758 "method": "iscsi_set_options", 00:13:38.758 "params": { 00:13:38.758 "node_base": "iqn.2016-06.io.spdk", 00:13:38.758 "max_sessions": 128, 00:13:38.759 "max_connections_per_session": 2, 00:13:38.759 "max_queue_depth": 64, 00:13:38.759 "default_time2wait": 2, 00:13:38.759 "default_time2retain": 20, 00:13:38.759 "first_burst_length": 8192, 00:13:38.759 "immediate_data": true, 00:13:38.759 "allow_duplicated_isid": false, 00:13:38.759 "error_recovery_level": 0, 00:13:38.759 "nop_timeout": 60, 00:13:38.759 "nop_in_interval": 30, 00:13:38.759 
"disable_chap": false, 00:13:38.759 "require_chap": false, 00:13:38.759 "mutual_chap": false, 00:13:38.759 "chap_group": 0, 00:13:38.759 "max_large_datain_per_connection": 64, 00:13:38.759 "max_r2t_per_connection": 4, 00:13:38.759 "pdu_pool_size": 36864, 00:13:38.759 "immediate_data_pool_size": 16384, 00:13:38.759 "data_out_pool_size": 2048 00:13:38.759 } 00:13:38.759 } 00:13:38.759 ] 00:13:38.759 } 00:13:38.759 ] 00:13:38.759 }' 00:13:38.759 [2024-11-19 07:28:47.896460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:38.759 [2024-11-19 07:28:47.896549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69216 ] 00:13:39.018 [2024-11-19 07:28:48.038046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.018 [2024-11-19 07:28:48.198845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.018 [2024-11-19 07:28:48.199009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.642 [2024-11-19 07:28:48.805786] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:39.643 [2024-11-19 07:28:48.813277] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:39.643 [2024-11-19 07:28:48.813335] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:39.643 [2024-11-19 07:28:48.813341] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:39.643 [2024-11-19 07:28:48.813346] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:39.643 [2024-11-19 07:28:48.822249] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:39.643 [2024-11-19 07:28:48.822268] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:39.643 [2024-11-19 07:28:48.829201] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:39.643 [2024-11-19 07:28:48.829268] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:39.643 [2024-11-19 07:28:48.846203] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:40.210 07:28:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.210 07:28:49 -- common/autotest_common.sh@862 -- # return 0 00:13:40.210 07:28:49 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:40.210 07:28:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.210 07:28:49 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:40.210 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:13:40.210 07:28:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.210 07:28:49 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:40.210 07:28:49 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:40.210 07:28:49 -- ublk/ublk.sh@125 -- # killprocess 69216 00:13:40.210 07:28:49 -- common/autotest_common.sh@936 -- # '[' -z 69216 ']' 00:13:40.210 07:28:49 -- common/autotest_common.sh@940 -- # kill -0 69216 00:13:40.210 07:28:49 -- common/autotest_common.sh@941 -- # uname 00:13:40.210 07:28:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.210 07:28:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69216 00:13:40.210 
07:28:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:40.210 killing process with pid 69216 00:13:40.210 07:28:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:40.210 07:28:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69216' 00:13:40.210 07:28:49 -- common/autotest_common.sh@955 -- # kill 69216 00:13:40.210 07:28:49 -- common/autotest_common.sh@960 -- # wait 69216 00:13:41.152 [2024-11-19 07:28:50.235824] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:41.152 [2024-11-19 07:28:50.265420] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:41.152 [2024-11-19 07:28:50.265764] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:41.152 [2024-11-19 07:28:50.274251] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:41.152 [2024-11-19 07:28:50.274301] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:41.152 [2024-11-19 07:28:50.274307] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:41.152 [2024-11-19 07:28:50.274331] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:41.152 [2024-11-19 07:28:50.274469] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:43.104 07:28:51 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:43.104 00:13:43.104 real 0m8.708s 00:13:43.104 user 0m6.100s 00:13:43.104 sys 0m3.498s 00:13:43.104 07:28:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:43.104 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:13:43.104 ************************************ 00:13:43.104 END TEST test_save_ublk_config 00:13:43.104 ************************************ 00:13:43.104 07:28:51 -- ublk/ublk.sh@139 -- # spdk_pid=69295 00:13:43.104 07:28:51 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.104 07:28:51 -- ublk/ublk.sh@141 -- # waitforlisten 69295 00:13:43.104 07:28:51 -- common/autotest_common.sh@829 -- # '[' -z 69295 ']' 00:13:43.104 07:28:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.104 07:28:51 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:43.104 07:28:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.104 07:28:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.104 07:28:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.104 07:28:51 -- common/autotest_common.sh@10 -- # set +x 00:13:43.104 [2024-11-19 07:28:51.989709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
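Unlike the single-core (-c 0x1) targets used so far, this spdk_tgt gets -m 0x3: bits 0 and 1 set, so two reactors come up, which the log confirms just below with "Total cores available: 2" and reactors started on cores 0 and 1. Decoding such a mask is plain bit arithmetic (a sketch):

    mask=0x3
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done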
00:13:43.104 [2024-11-19 07:28:51.989825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69295 ] 00:13:43.104 [2024-11-19 07:28:52.139048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.104 [2024-11-19 07:28:52.334980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:43.104 [2024-11-19 07:28:52.335293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.104 [2024-11-19 07:28:52.335417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.486 07:28:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.486 07:28:53 -- common/autotest_common.sh@862 -- # return 0 00:13:44.486 07:28:53 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:44.486 07:28:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.486 07:28:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.486 07:28:53 -- common/autotest_common.sh@10 -- # set +x 00:13:44.486 ************************************ 00:13:44.486 START TEST test_create_ublk 00:13:44.486 ************************************ 00:13:44.486 07:28:53 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:44.486 07:28:53 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:44.487 07:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.487 07:28:53 -- common/autotest_common.sh@10 -- # set +x 00:13:44.487 [2024-11-19 07:28:53.546538] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:44.487 07:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.487 07:28:53 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:44.487 07:28:53 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:44.487 07:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.487 07:28:53 -- common/autotest_common.sh@10 -- # set +x 00:13:44.746 07:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.746 07:28:53 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:44.746 07:28:53 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:44.746 07:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.746 07:28:53 -- common/autotest_common.sh@10 -- # set +x 00:13:44.746 [2024-11-19 07:28:53.784363] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:44.746 [2024-11-19 07:28:53.784802] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:44.746 [2024-11-19 07:28:53.784818] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:44.746 [2024-11-19 07:28:53.784828] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.746 [2024-11-19 07:28:53.793498] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.746 [2024-11-19 07:28:53.793531] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.746 [2024-11-19 07:28:53.798395] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.746 [2024-11-19 07:28:53.812427] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:44.746 [2024-11-19 07:28:53.834914] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.746 07:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.746 07:28:53 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:44.746 07:28:53 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:44.746 07:28:53 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:44.746 07:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.746 07:28:53 -- common/autotest_common.sh@10 -- # set +x 00:13:44.746 07:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.746 07:28:53 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:44.746 { 00:13:44.746 "ublk_device": "/dev/ublkb0", 00:13:44.746 "id": 0, 00:13:44.746 "queue_depth": 512, 00:13:44.746 "num_queues": 4, 00:13:44.746 "bdev_name": "Malloc0" 00:13:44.746 } 00:13:44.746 ]' 00:13:44.746 07:28:53 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:44.746 07:28:53 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:44.746 07:28:53 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:44.746 07:28:53 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:44.746 07:28:53 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:44.746 07:28:53 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:44.746 07:28:53 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:44.746 07:28:53 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:44.746 07:28:53 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:45.006 07:28:54 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:45.006 07:28:54 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:45.006 07:28:54 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:45.006 07:28:54 -- lvol/common.sh@41 -- # local offset=0 00:13:45.006 07:28:54 -- lvol/common.sh@42 -- # local size=134217728 00:13:45.006 07:28:54 -- lvol/common.sh@43 -- # local rw=write 00:13:45.006 07:28:54 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:45.006 07:28:54 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:45.006 07:28:54 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:45.006 07:28:54 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:45.006 07:28:54 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:45.006 07:28:54 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:45.006 07:28:54 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:45.006 fio: verification read phase will never start because write phase uses all of runtime 00:13:45.006 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:45.006 fio-3.35 00:13:45.006 Starting 1 process 00:13:55.011 00:13:55.011 fio_test: (groupid=0, jobs=1): err= 0: pid=69349: Tue Nov 19 07:29:04 2024 00:13:55.011 write: IOPS=16.5k, BW=64.5MiB/s (67.6MB/s)(645MiB/10001msec); 0 zone resets 00:13:55.011 clat (usec): min=35, max=11614, avg=59.77, stdev=123.19 00:13:55.011 lat (usec): min=35, max=11623, avg=60.22, stdev=123.21 00:13:55.011 clat percentiles (usec): 00:13:55.011 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 45], 00:13:55.011 | 
30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 52], 00:13:55.011 | 70.00th=[ 58], 80.00th=[ 69], 90.00th=[ 76], 95.00th=[ 81], 00:13:55.011 | 99.00th=[ 95], 99.50th=[ 103], 99.90th=[ 2573], 99.95th=[ 3359], 00:13:55.011 | 99.99th=[ 3720] 00:13:55.011 bw ( KiB/s): min=29144, max=84648, per=100.00%, avg=66259.37, stdev=16239.09, samples=19 00:13:55.011 iops : min= 7286, max=21162, avg=16564.84, stdev=4059.77, samples=19 00:13:55.011 lat (usec) : 50=53.12%, 100=46.27%, 250=0.36%, 500=0.04%, 750=0.01% 00:13:55.011 lat (usec) : 1000=0.02% 00:13:55.011 lat (msec) : 2=0.05%, 4=0.13%, 10=0.01%, 20=0.01% 00:13:55.011 cpu : usr=2.27%, sys=13.04%, ctx=165110, majf=0, minf=797 00:13:55.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.011 issued rwts: total=0,165106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.011 00:13:55.011 Run status group 0 (all jobs): 00:13:55.011 WRITE: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=645MiB (676MB), run=10001-10001msec 00:13:55.011 00:13:55.011 Disk stats (read/write): 00:13:55.011 ublkb0: ios=0/163402, merge=0/0, ticks=0/8428, in_queue=8428, util=99.07% 00:13:55.011 07:29:04 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:55.011 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.011 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.011 [2024-11-19 07:29:04.255486] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.269 [2024-11-19 07:29:04.291778] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.269 [2024-11-19 07:29:04.292725] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.269 [2024-11-19 07:29:04.304769] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.269 [2024-11-19 07:29:04.305021] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:55.269 [2024-11-19 07:29:04.305032] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:55.269 07:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 07:29:04 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:55.269 07:29:04 -- common/autotest_common.sh@650 -- # local es=0 00:13:55.269 07:29:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:55.269 07:29:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:55.269 07:29:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.269 07:29:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:55.269 07:29:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.269 07:29:04 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:55.269 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 [2024-11-19 07:29:04.318286] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:55.269 request: 00:13:55.269 { 00:13:55.269 "ublk_id": 0, 00:13:55.269 "method": "ublk_stop_disk", 00:13:55.269 "req_id": 1 00:13:55.269 } 00:13:55.270 Got JSON-RPC error response 00:13:55.270 response: 00:13:55.270 { 00:13:55.270 "code": 
-19, 00:13:55.270 "message": "No such device" 00:13:55.270 } 00:13:55.270 07:29:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:55.270 07:29:04 -- common/autotest_common.sh@653 -- # es=1 00:13:55.270 07:29:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.270 07:29:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.270 07:29:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.270 07:29:04 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:55.270 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.270 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.270 [2024-11-19 07:29:04.334245] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:55.270 [2024-11-19 07:29:04.342196] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:55.270 [2024-11-19 07:29:04.342224] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:55.270 07:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.270 07:29:04 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:55.270 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.270 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.529 07:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.529 07:29:04 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:55.529 07:29:04 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:55.529 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.529 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.529 07:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.529 07:29:04 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:55.529 07:29:04 -- lvol/common.sh@26 -- # jq length 00:13:55.529 07:29:04 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:55.529 07:29:04 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:55.529 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.529 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.529 07:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.529 07:29:04 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:55.529 07:29:04 -- lvol/common.sh@28 -- # jq length 00:13:55.787 07:29:04 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:55.787 00:13:55.787 real 0m11.283s 00:13:55.787 user 0m0.518s 00:13:55.787 sys 0m1.390s 00:13:55.787 07:29:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:55.787 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.787 ************************************ 00:13:55.787 END TEST test_create_ublk 00:13:55.787 ************************************ 00:13:55.787 07:29:04 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:55.787 07:29:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:55.787 07:29:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:55.787 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.787 ************************************ 00:13:55.787 START TEST test_create_multi_ublk 00:13:55.787 ************************************ 00:13:55.787 07:29:04 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:55.787 07:29:04 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:55.787 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.787 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.787 [2024-11-19 07:29:04.869887] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK 
target created successfully 00:13:55.787 07:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.787 07:29:04 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:55.787 07:29:04 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:55.787 07:29:04 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.787 07:29:04 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:55.788 07:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.788 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.046 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.046 07:29:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:56.046 07:29:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:56.046 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.046 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.046 [2024-11-19 07:29:05.119314] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:56.046 [2024-11-19 07:29:05.119674] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:56.046 [2024-11-19 07:29:05.119687] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:56.046 [2024-11-19 07:29:05.119695] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:56.046 [2024-11-19 07:29:05.131215] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:56.046 [2024-11-19 07:29:05.131241] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:56.046 [2024-11-19 07:29:05.143210] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:56.046 [2024-11-19 07:29:05.143740] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:56.046 [2024-11-19 07:29:05.180197] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:56.046 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.046 07:29:05 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:56.046 07:29:05 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:56.046 07:29:05 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:56.046 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.046 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.305 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.305 07:29:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:56.305 07:29:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:56.305 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.305 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.305 [2024-11-19 07:29:05.441303] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:56.305 [2024-11-19 07:29:05.441626] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:56.305 [2024-11-19 07:29:05.441640] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:56.305 [2024-11-19 07:29:05.441645] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:56.305 [2024-11-19 07:29:05.459082] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:56.305 [2024-11-19 07:29:05.459100] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_SET_PARAMS 00:13:56.305 [2024-11-19 07:29:05.464195] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:56.305 [2024-11-19 07:29:05.464719] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:56.305 [2024-11-19 07:29:05.492461] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:56.305 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.305 07:29:05 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:56.305 07:29:05 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:56.305 07:29:05 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:56.305 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.305 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.563 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.563 07:29:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:56.563 07:29:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:56.563 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.563 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.563 [2024-11-19 07:29:05.754311] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:56.563 [2024-11-19 07:29:05.754629] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:56.563 [2024-11-19 07:29:05.754641] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:56.563 [2024-11-19 07:29:05.754649] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:56.563 [2024-11-19 07:29:05.759080] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:56.563 [2024-11-19 07:29:05.759101] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:56.563 [2024-11-19 07:29:05.763196] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:56.563 [2024-11-19 07:29:05.763733] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:56.564 [2024-11-19 07:29:05.779205] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:56.564 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.564 07:29:05 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:56.564 07:29:05 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:56.564 07:29:05 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:56.564 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.564 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.822 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.822 07:29:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:56.822 07:29:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:56.822 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.822 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.822 [2024-11-19 07:29:05.961304] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:56.822 [2024-11-19 07:29:05.961620] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:56.822 [2024-11-19 07:29:05.961632] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:56.822 [2024-11-19 07:29:05.961637] 
ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:56.822 [2024-11-19 07:29:05.969215] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:56.822 [2024-11-19 07:29:05.969232] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:56.822 [2024-11-19 07:29:05.977211] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:56.822 [2024-11-19 07:29:05.977736] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:56.822 [2024-11-19 07:29:05.986207] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:56.822 07:29:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.822 07:29:05 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:56.822 07:29:05 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:56.822 07:29:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.822 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.822 07:29:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.822 07:29:06 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:56.822 { 00:13:56.822 "ublk_device": "/dev/ublkb0", 00:13:56.822 "id": 0, 00:13:56.822 "queue_depth": 512, 00:13:56.822 "num_queues": 4, 00:13:56.822 "bdev_name": "Malloc0" 00:13:56.822 }, 00:13:56.822 { 00:13:56.822 "ublk_device": "/dev/ublkb1", 00:13:56.822 "id": 1, 00:13:56.822 "queue_depth": 512, 00:13:56.822 "num_queues": 4, 00:13:56.822 "bdev_name": "Malloc1" 00:13:56.822 }, 00:13:56.822 { 00:13:56.822 "ublk_device": "/dev/ublkb2", 00:13:56.822 "id": 2, 00:13:56.822 "queue_depth": 512, 00:13:56.822 "num_queues": 4, 00:13:56.822 "bdev_name": "Malloc2" 00:13:56.822 }, 00:13:56.822 { 00:13:56.822 "ublk_device": "/dev/ublkb3", 00:13:56.822 "id": 3, 00:13:56.822 "queue_depth": 512, 00:13:56.822 "num_queues": 4, 00:13:56.822 "bdev_name": "Malloc3" 00:13:56.822 } 00:13:56.822 ]' 00:13:56.822 07:29:06 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:56.822 07:29:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:56.822 07:29:06 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:56.822 07:29:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:56.822 07:29:06 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:56.822 07:29:06 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:56.822 07:29:06 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:57.080 07:29:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:57.080 07:29:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:57.080 07:29:06 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.080 07:29:06 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:57.080 07:29:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:57.080 07:29:06 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:57.080 07:29:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:57.080 07:29:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:57.080 07:29:06 -- ublk/ublk.sh@78 -- # [[ Malloc1 = 
\M\a\l\l\o\c\1 ]] 00:13:57.080 07:29:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.080 07:29:06 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:57.340 07:29:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:57.340 07:29:06 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:57.340 07:29:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:57.340 07:29:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:57.340 07:29:06 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.340 07:29:06 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:57.340 07:29:06 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:57.340 07:29:06 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:57.340 07:29:06 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:57.340 07:29:06 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:57.629 07:29:06 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:57.629 07:29:06 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:57.629 07:29:06 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:57.629 07:29:06 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:57.629 07:29:06 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:57.629 07:29:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.629 07:29:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:57.629 07:29:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.629 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.629 [2024-11-19 07:29:06.646290] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:57.629 [2024-11-19 07:29:06.688668] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:57.629 [2024-11-19 07:29:06.689690] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:57.629 [2024-11-19 07:29:06.697236] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:57.629 [2024-11-19 07:29:06.697477] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:57.629 [2024-11-19 07:29:06.697491] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:57.629 07:29:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.629 07:29:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.629 07:29:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:57.629 07:29:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.629 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.629 [2024-11-19 07:29:06.713277] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:57.629 [2024-11-19 07:29:06.746751] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:57.629 [2024-11-19 07:29:06.747659] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:57.629 [2024-11-19 07:29:06.753204] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:57.629 [2024-11-19 
07:29:06.753430] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:57.629 [2024-11-19 07:29:06.753443] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:57.629 07:29:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.629 07:29:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.629 07:29:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:57.629 07:29:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.629 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.629 [2024-11-19 07:29:06.766259] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:57.629 [2024-11-19 07:29:06.806741] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:57.629 [2024-11-19 07:29:06.807633] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:57.629 [2024-11-19 07:29:06.818294] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:57.629 [2024-11-19 07:29:06.818520] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:57.629 [2024-11-19 07:29:06.818537] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:57.629 07:29:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.629 07:29:06 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.629 07:29:06 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:57.629 07:29:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.630 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.630 [2024-11-19 07:29:06.831274] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:57.913 [2024-11-19 07:29:06.869227] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:57.913 [2024-11-19 07:29:06.869818] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:57.913 [2024-11-19 07:29:06.883199] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:57.913 [2024-11-19 07:29:06.883450] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:57.913 [2024-11-19 07:29:06.883465] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:57.913 07:29:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.913 07:29:06 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:57.913 [2024-11-19 07:29:07.056294] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:57.913 [2024-11-19 07:29:07.062198] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:57.913 [2024-11-19 07:29:07.062227] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:57.913 07:29:07 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:57.913 07:29:07 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:57.913 07:29:07 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:57.913 07:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.913 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:13:58.480 07:29:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.480 07:29:07 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.480 07:29:07 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:58.480 07:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.480 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:13:58.737 07:29:07 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.737 07:29:07 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.737 07:29:07 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:58.737 07:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.737 07:29:07 -- common/autotest_common.sh@10 -- # set +x 00:13:58.995 07:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.995 07:29:08 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:58.995 07:29:08 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:58.995 07:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.995 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:13:58.995 07:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.995 07:29:08 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:58.995 07:29:08 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:58.995 07:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.995 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:13:58.995 07:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.995 07:29:08 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:58.995 07:29:08 -- lvol/common.sh@26 -- # jq length 00:13:59.254 07:29:08 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:59.254 07:29:08 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:59.254 07:29:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.254 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:13:59.254 07:29:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.254 07:29:08 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:59.254 07:29:08 -- lvol/common.sh@28 -- # jq length 00:13:59.254 07:29:08 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:59.254 00:13:59.254 real 0m3.454s 00:13:59.254 user 0m0.799s 00:13:59.254 sys 0m0.139s 00:13:59.254 07:29:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:59.254 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:13:59.254 ************************************ 00:13:59.254 END TEST test_create_multi_ublk 00:13:59.254 ************************************ 00:13:59.254 07:29:08 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:59.254 07:29:08 -- ublk/ublk.sh@147 -- # cleanup 00:13:59.254 07:29:08 -- ublk/ublk.sh@130 -- # killprocess 69295 00:13:59.254 07:29:08 -- common/autotest_common.sh@936 -- # '[' -z 69295 ']' 00:13:59.254 07:29:08 -- common/autotest_common.sh@940 -- # kill -0 69295 00:13:59.254 07:29:08 -- common/autotest_common.sh@941 -- # uname 00:13:59.254 07:29:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.254 07:29:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69295 00:13:59.254 07:29:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:59.254 07:29:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:59.254 07:29:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69295' 00:13:59.254 killing process with pid 69295 00:13:59.254 07:29:08 -- common/autotest_common.sh@955 -- # kill 69295 00:13:59.254 07:29:08 -- common/autotest_common.sh@960 -- # wait 69295 00:13:59.820 [2024-11-19 07:29:08.926959] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:59.820 [2024-11-19 07:29:08.927021] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:00.388 00:14:00.388 real 0m26.609s 00:14:00.388 user 0m37.589s 00:14:00.388 sys 0m10.147s 00:14:00.388 07:29:09 -- common/autotest_common.sh@1115 
-- # xtrace_disable 00:14:00.388 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:14:00.388 ************************************ 00:14:00.388 END TEST ublk 00:14:00.388 ************************************ 00:14:00.647 07:29:09 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:00.647 07:29:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:00.647 07:29:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.647 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:14:00.647 ************************************ 00:14:00.647 START TEST ublk_recovery 00:14:00.647 ************************************ 00:14:00.647 07:29:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:00.647 * Looking for test storage... 00:14:00.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:00.647 07:29:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:00.647 07:29:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:00.647 07:29:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:00.647 07:29:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:00.647 07:29:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:00.647 07:29:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:00.647 07:29:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:00.647 07:29:09 -- scripts/common.sh@335 -- # IFS=.-: 00:14:00.647 07:29:09 -- scripts/common.sh@335 -- # read -ra ver1 00:14:00.647 07:29:09 -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.647 07:29:09 -- scripts/common.sh@336 -- # read -ra ver2 00:14:00.647 07:29:09 -- scripts/common.sh@337 -- # local 'op=<' 00:14:00.647 07:29:09 -- scripts/common.sh@339 -- # ver1_l=2 00:14:00.647 07:29:09 -- scripts/common.sh@340 -- # ver2_l=1 00:14:00.647 07:29:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:00.647 07:29:09 -- scripts/common.sh@343 -- # case "$op" in 00:14:00.647 07:29:09 -- scripts/common.sh@344 -- # : 1 00:14:00.647 07:29:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:00.647 07:29:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.647 07:29:09 -- scripts/common.sh@364 -- # decimal 1 00:14:00.647 07:29:09 -- scripts/common.sh@352 -- # local d=1 00:14:00.647 07:29:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.647 07:29:09 -- scripts/common.sh@354 -- # echo 1 00:14:00.647 07:29:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:00.647 07:29:09 -- scripts/common.sh@365 -- # decimal 2 00:14:00.647 07:29:09 -- scripts/common.sh@352 -- # local d=2 00:14:00.647 07:29:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.647 07:29:09 -- scripts/common.sh@354 -- # echo 2 00:14:00.648 07:29:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:00.648 07:29:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:00.648 07:29:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:00.648 07:29:09 -- scripts/common.sh@367 -- # return 0 00:14:00.648 07:29:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.648 07:29:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.648 --rc genhtml_branch_coverage=1 00:14:00.648 --rc genhtml_function_coverage=1 00:14:00.648 --rc genhtml_legend=1 00:14:00.648 --rc geninfo_all_blocks=1 00:14:00.648 --rc geninfo_unexecuted_blocks=1 00:14:00.648 00:14:00.648 ' 00:14:00.648 07:29:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.648 --rc genhtml_branch_coverage=1 00:14:00.648 --rc genhtml_function_coverage=1 00:14:00.648 --rc genhtml_legend=1 00:14:00.648 --rc geninfo_all_blocks=1 00:14:00.648 --rc geninfo_unexecuted_blocks=1 00:14:00.648 00:14:00.648 ' 00:14:00.648 07:29:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.648 --rc genhtml_branch_coverage=1 00:14:00.648 --rc genhtml_function_coverage=1 00:14:00.648 --rc genhtml_legend=1 00:14:00.648 --rc geninfo_all_blocks=1 00:14:00.648 --rc geninfo_unexecuted_blocks=1 00:14:00.648 00:14:00.648 ' 00:14:00.648 07:29:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.648 --rc genhtml_branch_coverage=1 00:14:00.648 --rc genhtml_function_coverage=1 00:14:00.648 --rc genhtml_legend=1 00:14:00.648 --rc geninfo_all_blocks=1 00:14:00.648 --rc geninfo_unexecuted_blocks=1 00:14:00.648 00:14:00.648 ' 00:14:00.648 07:29:09 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:00.648 07:29:09 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:00.648 07:29:09 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:00.648 07:29:09 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:00.648 07:29:09 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:00.648 07:29:09 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:00.648 07:29:09 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:00.648 07:29:09 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:00.648 07:29:09 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:00.648 07:29:09 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:00.648 07:29:09 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69695 00:14:00.648 07:29:09 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.648 07:29:09 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69695 00:14:00.648 07:29:09 -- 
common/autotest_common.sh@829 -- # '[' -z 69695 ']' 00:14:00.648 07:29:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.648 07:29:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.648 07:29:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.648 07:29:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.648 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:14:00.648 07:29:09 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:00.648 [2024-11-19 07:29:09.882581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:00.648 [2024-11-19 07:29:09.882682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69695 ] 00:14:00.906 [2024-11-19 07:29:10.027981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:01.165 [2024-11-19 07:29:10.205082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:01.165 [2024-11-19 07:29:10.205375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.165 [2024-11-19 07:29:10.205597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.731 07:29:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.731 07:29:10 -- common/autotest_common.sh@862 -- # return 0 00:14:01.731 07:29:10 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:01.731 07:29:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.731 07:29:10 -- common/autotest_common.sh@10 -- # set +x 00:14:01.731 [2024-11-19 07:29:10.718873] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:01.731 07:29:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.731 07:29:10 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:01.731 07:29:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.731 07:29:10 -- common/autotest_common.sh@10 -- # set +x 00:14:01.731 malloc0 00:14:01.731 07:29:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.731 07:29:10 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:01.731 07:29:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.731 07:29:10 -- common/autotest_common.sh@10 -- # set +x 00:14:01.731 [2024-11-19 07:29:10.813318] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:01.731 [2024-11-19 07:29:10.813413] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:01.731 [2024-11-19 07:29:10.813420] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:01.731 [2024-11-19 07:29:10.813428] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:01.731 [2024-11-19 07:29:10.822302] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:01.731 [2024-11-19 07:29:10.822325] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:01.731 [2024-11-19 07:29:10.829200] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:01.731 [2024-11-19 07:29:10.829325] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:01.731 [2024-11-19 07:29:10.840199] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:01.731 1 00:14:01.731 07:29:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.731 07:29:10 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:02.666 07:29:11 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69730 00:14:02.666 07:29:11 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:02.666 07:29:11 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:02.925 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:02.925 fio-3.35 00:14:02.925 Starting 1 process 00:14:08.195 07:29:16 -- ublk/ublk_recovery.sh@36 -- # kill -9 69695 00:14:08.195 07:29:16 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:13.477 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69695 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:13.477 07:29:21 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69842 00:14:13.477 07:29:21 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:13.477 07:29:21 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:13.477 07:29:21 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69842 00:14:13.477 07:29:21 -- common/autotest_common.sh@829 -- # '[' -z 69842 ']' 00:14:13.477 07:29:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.477 07:29:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.477 07:29:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.477 07:29:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.477 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:14:13.477 [2024-11-19 07:29:21.930972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
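At this point the first target (pid 69695) has been killed with SIGKILL while fio was still running, and a second target is coming up so the live ublk device can be re-adopted without interrupting the workload. A minimal sketch of the recovery sequence the script drives next, assuming rpc.py against the replacement target's default /var/tmp/spdk.sock:

  # Recreate the ublk target inside the replacement SPDK process.
  scripts/rpc.py ublk_create_target
  # Recreate the backing bdev under the same name the dead target used.
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  # Re-adopt kernel ublk device 1; SPDK issues UBLK_CMD_GET_DEV_INFO and
  # UBLK_CMD_START_USER_RECOVERY, then UBLK_CMD_END_USER_RECOVERY once the
  # queues are live again, as the debug lines below record.
  scripts/rpc.py ublk_recover_disk malloc0 1

The /dev/ublkb1 node stays present throughout, which is how the 60-second fio job started above can run to completion across the target swap.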
00:14:13.477 [2024-11-19 07:29:21.931084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69842 ] 00:14:13.477 [2024-11-19 07:29:22.079108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:13.477 [2024-11-19 07:29:22.259277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:13.477 [2024-11-19 07:29:22.259682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.477 [2024-11-19 07:29:22.259709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.417 07:29:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.417 07:29:23 -- common/autotest_common.sh@862 -- # return 0 00:14:14.417 07:29:23 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:14.417 07:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.417 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:14:14.417 [2024-11-19 07:29:23.426091] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.417 07:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.417 07:29:23 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:14.417 07:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.417 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:14:14.417 malloc0 00:14:14.417 07:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.417 07:29:23 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:14.417 07:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.417 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:14:14.417 [2024-11-19 07:29:23.529323] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:14.417 [2024-11-19 07:29:23.529365] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:14.417 [2024-11-19 07:29:23.529373] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:14.417 [2024-11-19 07:29:23.538230] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:14.417 [2024-11-19 07:29:23.538251] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:14.417 1 00:14:14.417 [2024-11-19 07:29:23.538323] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:14.417 07:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.417 07:29:23 -- ublk/ublk_recovery.sh@52 -- # wait 69730 00:14:40.974 [2024-11-19 07:29:47.268210] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:40.974 [2024-11-19 07:29:47.274777] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:40.974 [2024-11-19 07:29:47.282378] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:40.974 [2024-11-19 07:29:47.282402] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:02.908 00:15:02.908 fio_test: (groupid=0, jobs=1): err= 0: pid=69733: Tue Nov 19 07:30:12 2024 00:15:02.908 read: IOPS=14.3k, BW=55.8MiB/s (58.5MB/s)(3347MiB/60002msec) 00:15:02.908 slat (nsec): min=1127, max=206521, 
avg=5148.17, stdev=1482.53 00:15:02.908 clat (usec): min=596, max=30439k, avg=4474.20, stdev=265088.87 00:15:02.908 lat (usec): min=601, max=30439k, avg=4479.35, stdev=265088.86 00:15:02.908 clat percentiles (usec): 00:15:02.908 | 1.00th=[ 1696], 5.00th=[ 1795], 10.00th=[ 1827], 20.00th=[ 1893], 00:15:02.908 | 30.00th=[ 1942], 40.00th=[ 1975], 50.00th=[ 2024], 60.00th=[ 2089], 00:15:02.908 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2376], 95.00th=[ 3130], 00:15:02.908 | 99.00th=[ 5211], 99.50th=[ 5800], 99.90th=[ 7504], 99.95th=[ 9110], 00:15:02.908 | 99.99th=[13304] 00:15:02.908 bw ( KiB/s): min=32616, max=132055, per=100.00%, avg=114139.88, stdev=16811.90, samples=59 00:15:02.908 iops : min= 8154, max=33013, avg=28534.98, stdev=4202.96, samples=59 00:15:02.908 write: IOPS=14.3k, BW=55.7MiB/s (58.4MB/s)(3342MiB/60002msec); 0 zone resets 00:15:02.908 slat (nsec): min=1181, max=183189, avg=5241.99, stdev=1616.43 00:15:02.908 clat (usec): min=631, max=30439k, avg=4484.84, stdev=261163.49 00:15:02.908 lat (usec): min=636, max=30439k, avg=4490.09, stdev=261163.48 00:15:02.908 clat percentiles (usec): 00:15:02.908 | 1.00th=[ 1762], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1991], 00:15:02.908 | 30.00th=[ 2024], 40.00th=[ 2057], 50.00th=[ 2114], 60.00th=[ 2180], 00:15:02.908 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2442], 95.00th=[ 3064], 00:15:02.908 | 99.00th=[ 5211], 99.50th=[ 5866], 99.90th=[ 7439], 99.95th=[ 8979], 00:15:02.908 | 99.99th=[13173] 00:15:02.908 bw ( KiB/s): min=32680, max=131872, per=100.00%, avg=113992.46, stdev=16810.90, samples=59 00:15:02.908 iops : min= 8170, max=32968, avg=28498.10, stdev=4202.71, samples=59 00:15:02.908 lat (usec) : 750=0.01%, 1000=0.01% 00:15:02.908 lat (msec) : 2=34.72%, 4=62.49%, 10=2.73%, 20=0.04%, >=2000=0.01% 00:15:02.908 cpu : usr=3.19%, sys=15.06%, ctx=57459, majf=0, minf=15 00:15:02.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:02.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.908 issued rwts: total=856762,855555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.908 00:15:02.908 Run status group 0 (all jobs): 00:15:02.908 READ: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=3347MiB (3509MB), run=60002-60002msec 00:15:02.908 WRITE: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=3342MiB (3504MB), run=60002-60002msec 00:15:02.908 00:15:02.908 Disk stats (read/write): 00:15:02.908 ublkb1: ios=853662/852583, merge=0/0, ticks=3782892/3716254, in_queue=7499146, util=99.90% 00:15:02.908 07:30:12 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:02.908 07:30:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.908 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:15:02.908 [2024-11-19 07:30:12.097458] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:02.908 [2024-11-19 07:30:12.134212] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:02.908 [2024-11-19 07:30:12.134349] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:02.908 [2024-11-19 07:30:12.144220] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:02.908 [2024-11-19 07:30:12.144316] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: 
remove from tailq 00:15:02.908 [2024-11-19 07:30:12.144324] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:02.908 07:30:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.908 07:30:12 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:02.909 07:30:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.909 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:15:02.909 [2024-11-19 07:30:12.159264] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:03.167 [2024-11-19 07:30:12.162901] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:03.167 [2024-11-19 07:30:12.162929] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:03.167 07:30:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.167 07:30:12 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:03.167 07:30:12 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:03.167 07:30:12 -- ublk/ublk_recovery.sh@14 -- # killprocess 69842 00:15:03.167 07:30:12 -- common/autotest_common.sh@936 -- # '[' -z 69842 ']' 00:15:03.167 07:30:12 -- common/autotest_common.sh@940 -- # kill -0 69842 00:15:03.167 07:30:12 -- common/autotest_common.sh@941 -- # uname 00:15:03.167 07:30:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.167 07:30:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69842 00:15:03.167 07:30:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:03.167 killing process with pid 69842 00:15:03.167 07:30:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:03.167 07:30:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69842' 00:15:03.167 07:30:12 -- common/autotest_common.sh@955 -- # kill 69842 00:15:03.167 07:30:12 -- common/autotest_common.sh@960 -- # wait 69842 00:15:04.106 [2024-11-19 07:30:13.226955] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:04.106 [2024-11-19 07:30:13.227002] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:05.047 00:15:05.047 real 1m4.266s 00:15:05.047 user 1m48.474s 00:15:05.047 sys 0m20.216s 00:15:05.047 07:30:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:05.047 ************************************ 00:15:05.047 END TEST ublk_recovery 00:15:05.047 ************************************ 00:15:05.047 07:30:13 -- common/autotest_common.sh@10 -- # set +x 00:15:05.047 07:30:13 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:13 -- spdk/autotest.sh@255 -- # timing_exit lib 00:15:05.047 07:30:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:05.047 07:30:13 -- common/autotest_common.sh@10 -- # set +x 00:15:05.047 07:30:14 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:15:05.047 07:30:14 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:05.047 07:30:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:05.047 07:30:14 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.047 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:15:05.047 ************************************ 00:15:05.047 START TEST ftl 00:15:05.047 ************************************ 00:15:05.047 07:30:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:05.047 * Looking for test storage... 00:15:05.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:05.047 07:30:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:05.047 07:30:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:05.047 07:30:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:05.047 07:30:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:05.047 07:30:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:05.047 07:30:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:05.047 07:30:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:05.047 07:30:14 -- scripts/common.sh@335 -- # IFS=.-: 00:15:05.047 07:30:14 -- scripts/common.sh@335 -- # read -ra ver1 00:15:05.047 07:30:14 -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.047 07:30:14 -- scripts/common.sh@336 -- # read -ra ver2 00:15:05.047 07:30:14 -- scripts/common.sh@337 -- # local 'op=<' 00:15:05.047 07:30:14 -- scripts/common.sh@339 -- # ver1_l=2 00:15:05.047 07:30:14 -- scripts/common.sh@340 -- # ver2_l=1 00:15:05.047 07:30:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:05.047 07:30:14 -- scripts/common.sh@343 -- # case "$op" in 00:15:05.047 07:30:14 -- scripts/common.sh@344 -- # : 1 00:15:05.047 07:30:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:05.047 07:30:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:05.047 07:30:14 -- scripts/common.sh@364 -- # decimal 1 00:15:05.047 07:30:14 -- scripts/common.sh@352 -- # local d=1 00:15:05.047 07:30:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.047 07:30:14 -- scripts/common.sh@354 -- # echo 1 00:15:05.047 07:30:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:05.047 07:30:14 -- scripts/common.sh@365 -- # decimal 2 00:15:05.047 07:30:14 -- scripts/common.sh@352 -- # local d=2 00:15:05.047 07:30:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.047 07:30:14 -- scripts/common.sh@354 -- # echo 2 00:15:05.047 07:30:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:05.047 07:30:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:05.047 07:30:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:05.047 07:30:14 -- scripts/common.sh@367 -- # return 0 00:15:05.047 07:30:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.047 07:30:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:05.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.047 --rc genhtml_branch_coverage=1 00:15:05.047 --rc genhtml_function_coverage=1 00:15:05.047 --rc genhtml_legend=1 00:15:05.047 --rc geninfo_all_blocks=1 00:15:05.047 --rc geninfo_unexecuted_blocks=1 00:15:05.047 00:15:05.047 ' 00:15:05.047 07:30:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:05.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.047 --rc genhtml_branch_coverage=1 00:15:05.047 --rc genhtml_function_coverage=1 00:15:05.047 --rc genhtml_legend=1 00:15:05.047 --rc geninfo_all_blocks=1 00:15:05.047 --rc geninfo_unexecuted_blocks=1 00:15:05.047 00:15:05.047 ' 00:15:05.047 
07:30:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:05.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.047 --rc genhtml_branch_coverage=1 00:15:05.047 --rc genhtml_function_coverage=1 00:15:05.047 --rc genhtml_legend=1 00:15:05.047 --rc geninfo_all_blocks=1 00:15:05.047 --rc geninfo_unexecuted_blocks=1 00:15:05.047 00:15:05.047 ' 00:15:05.047 07:30:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:05.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.047 --rc genhtml_branch_coverage=1 00:15:05.047 --rc genhtml_function_coverage=1 00:15:05.047 --rc genhtml_legend=1 00:15:05.047 --rc geninfo_all_blocks=1 00:15:05.047 --rc geninfo_unexecuted_blocks=1 00:15:05.047 00:15:05.047 ' 00:15:05.047 07:30:14 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:05.047 07:30:14 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:05.047 07:30:14 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:05.047 07:30:14 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:05.047 07:30:14 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:05.047 07:30:14 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:05.047 07:30:14 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.047 07:30:14 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:05.047 07:30:14 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:05.047 07:30:14 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.047 07:30:14 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.047 07:30:14 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:05.047 07:30:14 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:05.047 07:30:14 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:05.047 07:30:14 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:05.047 07:30:14 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:05.047 07:30:14 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:05.047 07:30:14 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.047 07:30:14 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.047 07:30:14 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:05.047 07:30:14 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:05.047 07:30:14 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:05.047 07:30:14 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:05.047 07:30:14 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:05.047 07:30:14 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:05.047 07:30:14 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:05.047 07:30:14 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:05.047 07:30:14 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:05.047 07:30:14 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:05.047 07:30:14 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.048 07:30:14 
-- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:05.048 07:30:14 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:05.048 07:30:14 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:05.048 07:30:14 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:05.048 07:30:14 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:05.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:05.614 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:05.614 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:05.614 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:05.614 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:05.614 07:30:14 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70657 00:15:05.614 07:30:14 -- ftl/ftl.sh@38 -- # waitforlisten 70657 00:15:05.614 07:30:14 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:05.614 07:30:14 -- common/autotest_common.sh@829 -- # '[' -z 70657 ']' 00:15:05.614 07:30:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.614 07:30:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.614 07:30:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.614 07:30:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.614 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:15:05.614 [2024-11-19 07:30:14.713330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:05.614 [2024-11-19 07:30:14.713699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70657 ] 00:15:05.614 [2024-11-19 07:30:14.856760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.873 [2024-11-19 07:30:15.034424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.873 [2024-11-19 07:30:15.034627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.438 07:30:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.438 07:30:15 -- common/autotest_common.sh@862 -- # return 0 00:15:06.438 07:30:15 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:06.438 07:30:15 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:07.371 07:30:16 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:07.371 07:30:16 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:07.711 07:30:16 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:07.711 07:30:16 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:07.711 07:30:16 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:07.969 07:30:17 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:15:07.969 07:30:17 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:07.969 07:30:17 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:15:07.969 07:30:17 -- 
ftl/ftl.sh@50 -- # break 00:15:07.969 07:30:17 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:15:07.969 07:30:17 -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:07.969 07:30:17 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:07.969 07:30:17 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:07.969 07:30:17 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:15:07.969 07:30:17 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:07.969 07:30:17 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:15:07.969 07:30:17 -- ftl/ftl.sh@63 -- # break 00:15:07.969 07:30:17 -- ftl/ftl.sh@66 -- # killprocess 70657 00:15:07.969 07:30:17 -- common/autotest_common.sh@936 -- # '[' -z 70657 ']' 00:15:07.969 07:30:17 -- common/autotest_common.sh@940 -- # kill -0 70657 00:15:07.969 07:30:17 -- common/autotest_common.sh@941 -- # uname 00:15:07.969 07:30:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.969 07:30:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70657 00:15:08.228 07:30:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:08.228 killing process with pid 70657 00:15:08.228 07:30:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:08.228 07:30:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70657' 00:15:08.228 07:30:17 -- common/autotest_common.sh@955 -- # kill 70657 00:15:08.228 07:30:17 -- common/autotest_common.sh@960 -- # wait 70657 00:15:09.602 07:30:18 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:15:09.602 07:30:18 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:15:09.602 07:30:18 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:09.602 07:30:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:09.602 07:30:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.602 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:15:09.602 ************************************ 00:15:09.602 START TEST ftl_fio_basic 00:15:09.602 ************************************ 00:15:09.602 07:30:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:09.602 * Looking for test storage... 
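The two jq filters above are how ftl.sh splits the attached namespaces: a non-zoned bdev with 64-byte metadata and at least 1310720 blocks becomes the nv-cache (0000:00:06.0 here), and any other non-zoned bdev of that size becomes a base device (0000:00:07.0). A condensed sketch of that selection, passing the chosen cache address through jq --arg rather than interpolating it into the filter string as the script does:

  # Candidate cache disks: separate-metadata namespaces big enough for FTL.
  nv_cache=$(scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
     .driver_specific.nvme[].pci_address')
  # Base disks: everything else non-zoned and >= 1310720 blocks.
  base=$(scripts/rpc.py bdev_get_bdevs | jq -r --arg nv "$nv_cache" \
    '.[] | select(.driver_specific.nvme[0].pci_address != $nv and .zoned == false
     and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')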
00:15:09.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.602 07:30:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:09.602 07:30:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:09.602 07:30:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:09.602 07:30:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:09.602 07:30:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:09.602 07:30:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:09.602 07:30:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:09.602 07:30:18 -- scripts/common.sh@335 -- # IFS=.-: 00:15:09.602 07:30:18 -- scripts/common.sh@335 -- # read -ra ver1 00:15:09.602 07:30:18 -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.602 07:30:18 -- scripts/common.sh@336 -- # read -ra ver2 00:15:09.602 07:30:18 -- scripts/common.sh@337 -- # local 'op=<' 00:15:09.602 07:30:18 -- scripts/common.sh@339 -- # ver1_l=2 00:15:09.602 07:30:18 -- scripts/common.sh@340 -- # ver2_l=1 00:15:09.602 07:30:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:09.602 07:30:18 -- scripts/common.sh@343 -- # case "$op" in 00:15:09.602 07:30:18 -- scripts/common.sh@344 -- # : 1 00:15:09.602 07:30:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:09.602 07:30:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:09.602 07:30:18 -- scripts/common.sh@364 -- # decimal 1 00:15:09.602 07:30:18 -- scripts/common.sh@352 -- # local d=1 00:15:09.602 07:30:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.602 07:30:18 -- scripts/common.sh@354 -- # echo 1 00:15:09.602 07:30:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:09.602 07:30:18 -- scripts/common.sh@365 -- # decimal 2 00:15:09.602 07:30:18 -- scripts/common.sh@352 -- # local d=2 00:15:09.602 07:30:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.602 07:30:18 -- scripts/common.sh@354 -- # echo 2 00:15:09.602 07:30:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:09.602 07:30:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:09.602 07:30:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:09.602 07:30:18 -- scripts/common.sh@367 -- # return 0 00:15:09.602 07:30:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.602 07:30:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:09.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.602 --rc genhtml_branch_coverage=1 00:15:09.602 --rc genhtml_function_coverage=1 00:15:09.602 --rc genhtml_legend=1 00:15:09.602 --rc geninfo_all_blocks=1 00:15:09.602 --rc geninfo_unexecuted_blocks=1 00:15:09.602 00:15:09.602 ' 00:15:09.603 07:30:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:09.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.603 --rc genhtml_branch_coverage=1 00:15:09.603 --rc genhtml_function_coverage=1 00:15:09.603 --rc genhtml_legend=1 00:15:09.603 --rc geninfo_all_blocks=1 00:15:09.603 --rc geninfo_unexecuted_blocks=1 00:15:09.603 00:15:09.603 ' 00:15:09.603 07:30:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:09.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.603 --rc genhtml_branch_coverage=1 00:15:09.603 --rc genhtml_function_coverage=1 00:15:09.603 --rc genhtml_legend=1 00:15:09.603 --rc geninfo_all_blocks=1 00:15:09.603 --rc geninfo_unexecuted_blocks=1 00:15:09.603 00:15:09.603 ' 00:15:09.603 07:30:18 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:09.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.603 --rc genhtml_branch_coverage=1 00:15:09.603 --rc genhtml_function_coverage=1 00:15:09.603 --rc genhtml_legend=1 00:15:09.603 --rc geninfo_all_blocks=1 00:15:09.603 --rc geninfo_unexecuted_blocks=1 00:15:09.603 00:15:09.603 ' 00:15:09.603 07:30:18 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:09.603 07:30:18 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:09.603 07:30:18 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.603 07:30:18 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.603 07:30:18 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:09.603 07:30:18 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:09.603 07:30:18 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.603 07:30:18 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:09.603 07:30:18 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:09.603 07:30:18 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.603 07:30:18 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.603 07:30:18 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:09.603 07:30:18 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:09.603 07:30:18 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.603 07:30:18 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.603 07:30:18 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:09.603 07:30:18 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:09.603 07:30:18 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.603 07:30:18 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.603 07:30:18 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:09.603 07:30:18 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:09.603 07:30:18 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.603 07:30:18 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.603 07:30:18 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.603 07:30:18 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.603 07:30:18 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:09.603 07:30:18 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:09.603 07:30:18 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.603 07:30:18 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.603 07:30:18 -- ftl/fio.sh@11 -- # declare -A suite 00:15:09.603 07:30:18 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:09.603 07:30:18 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:09.603 07:30:18 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:09.603 07:30:18 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.603 07:30:18 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:15:09.603 07:30:18 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:15:09.603 07:30:18 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:09.603 07:30:18 -- ftl/fio.sh@26 -- # uuid= 00:15:09.603 07:30:18 -- ftl/fio.sh@27 -- # timeout=240 00:15:09.603 07:30:18 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:09.603 07:30:18 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:09.603 07:30:18 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:09.603 07:30:18 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:09.603 07:30:18 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.603 07:30:18 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.603 07:30:18 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:09.603 07:30:18 -- ftl/fio.sh@45 -- # svcpid=70783 00:15:09.603 07:30:18 -- ftl/fio.sh@46 -- # waitforlisten 70783 00:15:09.603 07:30:18 -- common/autotest_common.sh@829 -- # '[' -z 70783 ']' 00:15:09.603 07:30:18 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:09.603 07:30:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.603 07:30:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.603 07:30:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.603 07:30:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.603 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:15:09.603 [2024-11-19 07:30:18.810486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
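The lt/cmp_versions dance traced just above decides whether the installed lcov predates 2.0 and therefore still needs the legacy --rc lcov_branch_coverage/lcov_function_coverage switches. A minimal sketch of that comparison, assuming the same helper names as the trace (the upstream scripts/common.sh also normalizes each field through its decimal helper, omitted here):

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}        # pad the shorter version with zeros
        (( a > b )) && { [[ $op == ">" ]]; return; }
        (( a < b )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]                          # all fields compared equal
}

lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"

With lcov 1.x installed, as in this run, lt 1.15 2 succeeds and the script exports the LCOV_OPTS block seen in the trace.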
00:15:09.603 [2024-11-19 07:30:18.810705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70783 ] 00:15:09.862 [2024-11-19 07:30:18.958815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:10.127 [2024-11-19 07:30:19.139478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:10.127 [2024-11-19 07:30:19.140081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.127 [2024-11-19 07:30:19.140397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.127 [2024-11-19 07:30:19.140527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.063 07:30:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.063 07:30:20 -- common/autotest_common.sh@862 -- # return 0 00:15:11.063 07:30:20 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:11.063 07:30:20 -- ftl/common.sh@54 -- # local name=nvme0 00:15:11.063 07:30:20 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:11.063 07:30:20 -- ftl/common.sh@56 -- # local size=103424 00:15:11.063 07:30:20 -- ftl/common.sh@59 -- # local base_bdev 00:15:11.063 07:30:20 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:11.321 07:30:20 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:11.321 07:30:20 -- ftl/common.sh@62 -- # local base_size 00:15:11.321 07:30:20 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:11.321 07:30:20 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:11.321 07:30:20 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:11.321 07:30:20 -- common/autotest_common.sh@1369 -- # local bs 00:15:11.321 07:30:20 -- common/autotest_common.sh@1370 -- # local nb 00:15:11.321 07:30:20 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:11.579 07:30:20 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:11.579 { 00:15:11.579 "name": "nvme0n1", 00:15:11.579 "aliases": [ 00:15:11.579 "f1767215-a187-4125-bc05-1c02ccc0ebcb" 00:15:11.579 ], 00:15:11.579 "product_name": "NVMe disk", 00:15:11.579 "block_size": 4096, 00:15:11.579 "num_blocks": 1310720, 00:15:11.579 "uuid": "f1767215-a187-4125-bc05-1c02ccc0ebcb", 00:15:11.579 "assigned_rate_limits": { 00:15:11.579 "rw_ios_per_sec": 0, 00:15:11.579 "rw_mbytes_per_sec": 0, 00:15:11.579 "r_mbytes_per_sec": 0, 00:15:11.579 "w_mbytes_per_sec": 0 00:15:11.579 }, 00:15:11.579 "claimed": false, 00:15:11.579 "zoned": false, 00:15:11.579 "supported_io_types": { 00:15:11.579 "read": true, 00:15:11.579 "write": true, 00:15:11.579 "unmap": true, 00:15:11.579 "write_zeroes": true, 00:15:11.579 "flush": true, 00:15:11.579 "reset": true, 00:15:11.579 "compare": true, 00:15:11.579 "compare_and_write": false, 00:15:11.579 "abort": true, 00:15:11.579 "nvme_admin": true, 00:15:11.579 "nvme_io": true 00:15:11.579 }, 00:15:11.579 "driver_specific": { 00:15:11.579 "nvme": [ 00:15:11.579 { 00:15:11.579 "pci_address": "0000:00:07.0", 00:15:11.579 "trid": { 00:15:11.579 "trtype": "PCIe", 00:15:11.579 "traddr": "0000:00:07.0" 00:15:11.579 }, 00:15:11.579 "ctrlr_data": { 00:15:11.579 "cntlid": 0, 00:15:11.579 "vendor_id": "0x1b36", 00:15:11.579 "model_number": "QEMU NVMe Ctrl", 00:15:11.579 "serial_number": 
"12341", 00:15:11.579 "firmware_revision": "8.0.0", 00:15:11.579 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:11.579 "oacs": { 00:15:11.579 "security": 0, 00:15:11.579 "format": 1, 00:15:11.579 "firmware": 0, 00:15:11.579 "ns_manage": 1 00:15:11.579 }, 00:15:11.579 "multi_ctrlr": false, 00:15:11.579 "ana_reporting": false 00:15:11.579 }, 00:15:11.579 "vs": { 00:15:11.579 "nvme_version": "1.4" 00:15:11.579 }, 00:15:11.579 "ns_data": { 00:15:11.579 "id": 1, 00:15:11.579 "can_share": false 00:15:11.579 } 00:15:11.579 } 00:15:11.579 ], 00:15:11.579 "mp_policy": "active_passive" 00:15:11.579 } 00:15:11.579 } 00:15:11.579 ]' 00:15:11.579 07:30:20 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:11.579 07:30:20 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:11.579 07:30:20 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:11.579 07:30:20 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:11.579 07:30:20 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:11.579 07:30:20 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:11.579 07:30:20 -- ftl/common.sh@63 -- # base_size=5120 00:15:11.579 07:30:20 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:11.579 07:30:20 -- ftl/common.sh@67 -- # clear_lvols 00:15:11.579 07:30:20 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:11.579 07:30:20 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:11.836 07:30:20 -- ftl/common.sh@28 -- # stores= 00:15:11.836 07:30:20 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:12.094 07:30:21 -- ftl/common.sh@68 -- # lvs=ecc5d28e-ea0a-4164-863a-ba6e3ec6752f 00:15:12.094 07:30:21 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ecc5d28e-ea0a-4164-863a-ba6e3ec6752f 00:15:12.353 07:30:21 -- ftl/fio.sh@48 -- # split_bdev=232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.353 07:30:21 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.353 07:30:21 -- ftl/common.sh@35 -- # local name=nvc0 00:15:12.353 07:30:21 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:12.353 07:30:21 -- ftl/common.sh@37 -- # local base_bdev=232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.353 07:30:21 -- ftl/common.sh@38 -- # local cache_size= 00:15:12.353 07:30:21 -- ftl/common.sh@41 -- # get_bdev_size 232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.353 07:30:21 -- common/autotest_common.sh@1367 -- # local bdev_name=232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.353 07:30:21 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:12.353 07:30:21 -- common/autotest_common.sh@1369 -- # local bs 00:15:12.353 07:30:21 -- common/autotest_common.sh@1370 -- # local nb 00:15:12.353 07:30:21 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.353 07:30:21 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:12.353 { 00:15:12.353 "name": "232890e7-a508-44ef-a4e1-af348fa01467", 00:15:12.353 "aliases": [ 00:15:12.353 "lvs/nvme0n1p0" 00:15:12.353 ], 00:15:12.353 "product_name": "Logical Volume", 00:15:12.353 "block_size": 4096, 00:15:12.353 "num_blocks": 26476544, 00:15:12.353 "uuid": "232890e7-a508-44ef-a4e1-af348fa01467", 00:15:12.353 "assigned_rate_limits": { 00:15:12.353 "rw_ios_per_sec": 0, 00:15:12.353 "rw_mbytes_per_sec": 0, 00:15:12.353 "r_mbytes_per_sec": 0, 00:15:12.353 
"w_mbytes_per_sec": 0 00:15:12.353 }, 00:15:12.353 "claimed": false, 00:15:12.353 "zoned": false, 00:15:12.353 "supported_io_types": { 00:15:12.353 "read": true, 00:15:12.353 "write": true, 00:15:12.353 "unmap": true, 00:15:12.353 "write_zeroes": true, 00:15:12.353 "flush": false, 00:15:12.353 "reset": true, 00:15:12.353 "compare": false, 00:15:12.353 "compare_and_write": false, 00:15:12.353 "abort": false, 00:15:12.353 "nvme_admin": false, 00:15:12.353 "nvme_io": false 00:15:12.353 }, 00:15:12.353 "driver_specific": { 00:15:12.353 "lvol": { 00:15:12.353 "lvol_store_uuid": "ecc5d28e-ea0a-4164-863a-ba6e3ec6752f", 00:15:12.353 "base_bdev": "nvme0n1", 00:15:12.353 "thin_provision": true, 00:15:12.353 "snapshot": false, 00:15:12.353 "clone": false, 00:15:12.353 "esnap_clone": false 00:15:12.353 } 00:15:12.353 } 00:15:12.353 } 00:15:12.353 ]' 00:15:12.353 07:30:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:12.353 07:30:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:12.353 07:30:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:12.353 07:30:21 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:12.353 07:30:21 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:12.353 07:30:21 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:12.353 07:30:21 -- ftl/common.sh@41 -- # local base_size=5171 00:15:12.353 07:30:21 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:12.353 07:30:21 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:12.612 07:30:21 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:12.612 07:30:21 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:12.612 07:30:21 -- ftl/common.sh@48 -- # get_bdev_size 232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.612 07:30:21 -- common/autotest_common.sh@1367 -- # local bdev_name=232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.612 07:30:21 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:12.612 07:30:21 -- common/autotest_common.sh@1369 -- # local bs 00:15:12.612 07:30:21 -- common/autotest_common.sh@1370 -- # local nb 00:15:12.612 07:30:21 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 232890e7-a508-44ef-a4e1-af348fa01467 00:15:12.870 07:30:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:12.870 { 00:15:12.870 "name": "232890e7-a508-44ef-a4e1-af348fa01467", 00:15:12.870 "aliases": [ 00:15:12.870 "lvs/nvme0n1p0" 00:15:12.870 ], 00:15:12.870 "product_name": "Logical Volume", 00:15:12.870 "block_size": 4096, 00:15:12.870 "num_blocks": 26476544, 00:15:12.870 "uuid": "232890e7-a508-44ef-a4e1-af348fa01467", 00:15:12.870 "assigned_rate_limits": { 00:15:12.870 "rw_ios_per_sec": 0, 00:15:12.870 "rw_mbytes_per_sec": 0, 00:15:12.870 "r_mbytes_per_sec": 0, 00:15:12.870 "w_mbytes_per_sec": 0 00:15:12.870 }, 00:15:12.870 "claimed": false, 00:15:12.870 "zoned": false, 00:15:12.870 "supported_io_types": { 00:15:12.870 "read": true, 00:15:12.870 "write": true, 00:15:12.870 "unmap": true, 00:15:12.870 "write_zeroes": true, 00:15:12.870 "flush": false, 00:15:12.870 "reset": true, 00:15:12.870 "compare": false, 00:15:12.870 "compare_and_write": false, 00:15:12.870 "abort": false, 00:15:12.870 "nvme_admin": false, 00:15:12.870 "nvme_io": false 00:15:12.870 }, 00:15:12.870 "driver_specific": { 00:15:12.870 "lvol": { 00:15:12.870 "lvol_store_uuid": "ecc5d28e-ea0a-4164-863a-ba6e3ec6752f", 00:15:12.870 "base_bdev": "nvme0n1", 00:15:12.870 "thin_provision": true, 
00:15:12.870 "snapshot": false, 00:15:12.870 "clone": false, 00:15:12.870 "esnap_clone": false 00:15:12.870 } 00:15:12.870 } 00:15:12.870 } 00:15:12.870 ]' 00:15:12.870 07:30:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:12.870 07:30:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:12.870 07:30:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:12.870 07:30:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:12.870 07:30:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:12.870 07:30:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:12.870 07:30:22 -- ftl/common.sh@48 -- # cache_size=5171 00:15:12.870 07:30:22 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:13.128 07:30:22 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:13.128 07:30:22 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:13.128 07:30:22 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:13.128 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:13.128 07:30:22 -- ftl/fio.sh@56 -- # get_bdev_size 232890e7-a508-44ef-a4e1-af348fa01467 00:15:13.128 07:30:22 -- common/autotest_common.sh@1367 -- # local bdev_name=232890e7-a508-44ef-a4e1-af348fa01467 00:15:13.128 07:30:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:13.128 07:30:22 -- common/autotest_common.sh@1369 -- # local bs 00:15:13.128 07:30:22 -- common/autotest_common.sh@1370 -- # local nb 00:15:13.128 07:30:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 232890e7-a508-44ef-a4e1-af348fa01467 00:15:13.386 07:30:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:13.386 { 00:15:13.386 "name": "232890e7-a508-44ef-a4e1-af348fa01467", 00:15:13.386 "aliases": [ 00:15:13.386 "lvs/nvme0n1p0" 00:15:13.386 ], 00:15:13.386 "product_name": "Logical Volume", 00:15:13.386 "block_size": 4096, 00:15:13.386 "num_blocks": 26476544, 00:15:13.386 "uuid": "232890e7-a508-44ef-a4e1-af348fa01467", 00:15:13.386 "assigned_rate_limits": { 00:15:13.386 "rw_ios_per_sec": 0, 00:15:13.386 "rw_mbytes_per_sec": 0, 00:15:13.386 "r_mbytes_per_sec": 0, 00:15:13.386 "w_mbytes_per_sec": 0 00:15:13.386 }, 00:15:13.386 "claimed": false, 00:15:13.386 "zoned": false, 00:15:13.386 "supported_io_types": { 00:15:13.386 "read": true, 00:15:13.386 "write": true, 00:15:13.386 "unmap": true, 00:15:13.386 "write_zeroes": true, 00:15:13.386 "flush": false, 00:15:13.386 "reset": true, 00:15:13.386 "compare": false, 00:15:13.386 "compare_and_write": false, 00:15:13.386 "abort": false, 00:15:13.386 "nvme_admin": false, 00:15:13.386 "nvme_io": false 00:15:13.386 }, 00:15:13.386 "driver_specific": { 00:15:13.386 "lvol": { 00:15:13.386 "lvol_store_uuid": "ecc5d28e-ea0a-4164-863a-ba6e3ec6752f", 00:15:13.386 "base_bdev": "nvme0n1", 00:15:13.386 "thin_provision": true, 00:15:13.386 "snapshot": false, 00:15:13.386 "clone": false, 00:15:13.386 "esnap_clone": false 00:15:13.386 } 00:15:13.386 } 00:15:13.386 } 00:15:13.386 ]' 00:15:13.386 07:30:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:13.386 07:30:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:13.386 07:30:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:13.386 07:30:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:13.386 07:30:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:13.386 07:30:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:13.386 
07:30:22 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:13.386 07:30:22 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:13.386 07:30:22 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 232890e7-a508-44ef-a4e1-af348fa01467 -c nvc0n1p0 --l2p_dram_limit 60 00:15:13.645 [2024-11-19 07:30:22.712899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.712942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:13.645 [2024-11-19 07:30:22.712955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:13.645 [2024-11-19 07:30:22.712962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.713023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.713030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:13.645 [2024-11-19 07:30:22.713038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:13.645 [2024-11-19 07:30:22.713044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.713072] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:13.645 [2024-11-19 07:30:22.713690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:13.645 [2024-11-19 07:30:22.713705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.713711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:13.645 [2024-11-19 07:30:22.713719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:15:13.645 [2024-11-19 07:30:22.713725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.713755] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 60d5f0b2-ec95-42c1-a09c-028b63288583 00:15:13.645 [2024-11-19 07:30:22.714698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.714726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:13.645 [2024-11-19 07:30:22.714734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:13.645 [2024-11-19 07:30:22.714742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.719445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.719475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:13.645 [2024-11-19 07:30:22.719482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.632 ms 00:15:13.645 [2024-11-19 07:30:22.719490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.719560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.719568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:13.645 [2024-11-19 07:30:22.719575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:13.645 [2024-11-19 07:30:22.719584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.719626] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.719636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:13.645 [2024-11-19 07:30:22.719642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:13.645 [2024-11-19 07:30:22.719650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.719673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:13.645 [2024-11-19 07:30:22.722610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.722634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:13.645 [2024-11-19 07:30:22.722643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.940 ms 00:15:13.645 [2024-11-19 07:30:22.722649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.722684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-11-19 07:30:22.722691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:13.645 [2024-11-19 07:30:22.722698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:13.645 [2024-11-19 07:30:22.722704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-11-19 07:30:22.722729] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:13.645 [2024-11-19 07:30:22.722814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:13.645 [2024-11-19 07:30:22.722826] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:13.646 [2024-11-19 07:30:22.722835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:13.646 [2024-11-19 07:30:22.722844] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:13.646 [2024-11-19 07:30:22.722851] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:13.646 [2024-11-19 07:30:22.722858] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:13.646 [2024-11-19 07:30:22.722863] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:13.646 [2024-11-19 07:30:22.722874] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:13.646 [2024-11-19 07:30:22.722879] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:13.646 [2024-11-19 07:30:22.722887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.646 [2024-11-19 07:30:22.722892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:13.646 [2024-11-19 07:30:22.722899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:15:13.646 [2024-11-19 07:30:22.722904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.646 [2024-11-19 07:30:22.722958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.646 [2024-11-19 07:30:22.722965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:13.646 [2024-11-19 07:30:22.722972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:15:13.646 [2024-11-19 07:30:22.722977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.646 [2024-11-19 07:30:22.723048] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:13.646 [2024-11-19 07:30:22.723055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:13.646 [2024-11-19 07:30:22.723062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:13.646 [2024-11-19 07:30:22.723080] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:13.646 [2024-11-19 07:30:22.723098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:13.646 [2024-11-19 07:30:22.723113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:13.646 [2024-11-19 07:30:22.723119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:13.646 [2024-11-19 07:30:22.723126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:13.646 [2024-11-19 07:30:22.723131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:13.646 [2024-11-19 07:30:22.723138] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:13.646 [2024-11-19 07:30:22.723143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:13.646 [2024-11-19 07:30:22.723156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:13.646 [2024-11-19 07:30:22.723163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:13.646 [2024-11-19 07:30:22.723174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:13.646 [2024-11-19 07:30:22.723195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:13.646 [2024-11-19 07:30:22.723207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723219] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:13.646 [2024-11-19 07:30:22.723225] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:13.646 [2024-11-19 07:30:22.723242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723248] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:13.646 [2024-11-19 07:30:22.723261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:13.646 [2024-11-19 07:30:22.723288] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:13.646 [2024-11-19 07:30:22.723300] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:13.646 [2024-11-19 07:30:22.723307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:13.646 [2024-11-19 07:30:22.723311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:13.646 [2024-11-19 07:30:22.723317] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:13.646 [2024-11-19 07:30:22.723323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:13.646 [2024-11-19 07:30:22.723332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.646 [2024-11-19 07:30:22.723344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:13.646 [2024-11-19 07:30:22.723350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:13.646 [2024-11-19 07:30:22.723357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:13.646 [2024-11-19 07:30:22.723362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:13.646 [2024-11-19 07:30:22.723370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:13.646 [2024-11-19 07:30:22.723375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:13.646 [2024-11-19 07:30:22.723382] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:13.646 [2024-11-19 07:30:22.723389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:13.646 [2024-11-19 07:30:22.723399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:13.646 [2024-11-19 07:30:22.723404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:13.646 [2024-11-19 07:30:22.723411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:13.646 [2024-11-19 07:30:22.723416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:13.646 [2024-11-19 07:30:22.723423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:13.646 [2024-11-19 07:30:22.723428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:13.646 
[2024-11-19 07:30:22.723435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:13.646 [2024-11-19 07:30:22.723440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:13.646 [2024-11-19 07:30:22.723446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:13.646 [2024-11-19 07:30:22.723452] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:13.646 [2024-11-19 07:30:22.723459] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:13.646 [2024-11-19 07:30:22.723465] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:13.646 [2024-11-19 07:30:22.723473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:13.646 [2024-11-19 07:30:22.723479] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:13.646 [2024-11-19 07:30:22.723486] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:13.646 [2024-11-19 07:30:22.723494] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:13.646 [2024-11-19 07:30:22.723501] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:13.646 [2024-11-19 07:30:22.723506] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:13.646 [2024-11-19 07:30:22.723513] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:13.646 [2024-11-19 07:30:22.723519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.646 [2024-11-19 07:30:22.723526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:13.646 [2024-11-19 07:30:22.723532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:15:13.646 [2024-11-19 07:30:22.723540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.646 [2024-11-19 07:30:22.735395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.646 [2024-11-19 07:30:22.735529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:13.646 [2024-11-19 07:30:22.735542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.804 ms 00:15:13.646 [2024-11-19 07:30:22.735549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.646 [2024-11-19 07:30:22.735626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.646 [2024-11-19 07:30:22.735635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:13.646 [2024-11-19 07:30:22.735641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:13.646 [2024-11-19 07:30:22.735648] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.646 [2024-11-19 07:30:22.760569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.647 [2024-11-19 07:30:22.760598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:13.647 [2024-11-19 07:30:22.760607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.881 ms 00:15:13.647 [2024-11-19 07:30:22.760614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.647 [2024-11-19 07:30:22.760642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.647 [2024-11-19 07:30:22.760650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:13.647 [2024-11-19 07:30:22.760657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:13.647 [2024-11-19 07:30:22.760665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.647 [2024-11-19 07:30:22.760961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.647 [2024-11-19 07:30:22.760978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:13.647 [2024-11-19 07:30:22.760984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:15:13.647 [2024-11-19 07:30:22.760991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.647 [2024-11-19 07:30:22.761098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.647 [2024-11-19 07:30:22.761108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:13.647 [2024-11-19 07:30:22.761114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:15:13.647 [2024-11-19 07:30:22.761121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.647 [2024-11-19 07:30:22.789448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.647 [2024-11-19 07:30:22.789481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:13.647 [2024-11-19 07:30:22.789491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.306 ms 00:15:13.647 [2024-11-19 07:30:22.789500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.647 [2024-11-19 07:30:22.798530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:13.647 [2024-11-19 07:30:22.810520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.647 [2024-11-19 07:30:22.810548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:13.647 [2024-11-19 07:30:22.810559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.931 ms 00:15:13.647 [2024-11-19 07:30:22.810565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.647 [2024-11-19 07:30:22.864194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.647 [2024-11-19 07:30:22.864230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:13.647 [2024-11-19 07:30:22.864241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.600 ms 00:15:13.647 [2024-11-19 07:30:22.864247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.647 [2024-11-19 07:30:22.864284] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
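Stripped of the xtrace noise, the device setup the log has walked through so far reduces to six RPCs: attach the base controller, create an lvstore and a thin lvol on it, attach the cache controller, split off a cache partition sized to the 5171 MiB computed from the jq output above, and hand both halves to bdev_ftl_create. A condensed replay, assuming spdk_tgt is already listening on /var/tmp/spdk.sock exactly as in this run (UUIDs are captured from the RPC output rather than hard-coded):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # base NVMe -> nvme0n1
lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)                    # lvstore UUID
lvol=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # 103424 MiB thin lvol
"$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0    # cache NVMe -> nvc0n1
"$rpc" bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB -> nvc0n1p0
"$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60

The -t 240 matches the timeout=240 set earlier in fio.sh; the FTL startup trace above (superblock init through NV cache scrub) is the target-side effect of that last call.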
00:15:13.647 [2024-11-19 07:30:22.864293] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:16.177 [2024-11-19 07:30:25.272117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.272168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:16.177 [2024-11-19 07:30:25.272198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2407.821 ms 00:15:16.177 [2024-11-19 07:30:25.272208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.272411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.272422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:16.177 [2024-11-19 07:30:25.272433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:15:16.177 [2024-11-19 07:30:25.272440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.295335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.295481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:16.177 [2024-11-19 07:30:25.295503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.842 ms 00:15:16.177 [2024-11-19 07:30:25.295511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.318164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.318208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:16.177 [2024-11-19 07:30:25.318223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.609 ms 00:15:16.177 [2024-11-19 07:30:25.318231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.318544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.318647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:16.177 [2024-11-19 07:30:25.318664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:15:16.177 [2024-11-19 07:30:25.318672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.378215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.378251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:16.177 [2024-11-19 07:30:25.378265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.504 ms 00:15:16.177 [2024-11-19 07:30:25.378274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.402213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.402244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:16.177 [2024-11-19 07:30:25.402259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.899 ms 00:15:16.177 [2024-11-19 07:30:25.402266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.405854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.405980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
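One real defect is captured earlier in this block: fio.sh line 52 executed '[' -eq 1 ']' and bash rejected it with "[: -eq: unary operator expected", the signature of an unquoted variable that expanded to nothing inside a POSIX test. The variable name below is illustrative only (the log does not show which identifier line 52 dereferences); the failure mode and the usual fixes are standard bash:

flag=                               # empty in this run, as at fio.sh line 52
[ $flag -eq 1 ]                     # expands to '[ -eq 1 ]' -> unary operator expected
[ "${flag:-0}" -eq 1 ] || echo no   # fix 1: quote and supply a numeric default
[[ $flag -eq 1 ]] || echo no        # fix 2: [[ ]] treats the empty string as 0

The test continues past the error here only because the expression's exit status is never acted on; under set -e the same line would abort the run.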
00:15:16.177 [2024-11-19 07:30:25.406001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.548 ms 00:15:16.177 [2024-11-19 07:30:25.406010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.428777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.428894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:16.177 [2024-11-19 07:30:25.428914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.724 ms 00:15:16.177 [2024-11-19 07:30:25.428921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.428968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.428978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:16.177 [2024-11-19 07:30:25.428987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:16.177 [2024-11-19 07:30:25.428994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.429092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.177 [2024-11-19 07:30:25.429102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:16.177 [2024-11-19 07:30:25.429114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:16.177 [2024-11-19 07:30:25.429121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.177 [2024-11-19 07:30:25.429996] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2716.682 ms, result 0 00:15:16.435 { 00:15:16.435 "name": "ftl0", 00:15:16.435 "uuid": "60d5f0b2-ec95-42c1-a09c-028b63288583" 00:15:16.435 } 00:15:16.435 07:30:25 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:16.436 07:30:25 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:16.436 07:30:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.436 07:30:25 -- common/autotest_common.sh@899 -- # local i 00:15:16.436 07:30:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.436 07:30:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.436 07:30:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:16.436 07:30:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:16.694 [ 00:15:16.694 { 00:15:16.694 "name": "ftl0", 00:15:16.694 "aliases": [ 00:15:16.694 "60d5f0b2-ec95-42c1-a09c-028b63288583" 00:15:16.694 ], 00:15:16.694 "product_name": "FTL disk", 00:15:16.694 "block_size": 4096, 00:15:16.694 "num_blocks": 20971520, 00:15:16.694 "uuid": "60d5f0b2-ec95-42c1-a09c-028b63288583", 00:15:16.694 "assigned_rate_limits": { 00:15:16.694 "rw_ios_per_sec": 0, 00:15:16.694 "rw_mbytes_per_sec": 0, 00:15:16.694 "r_mbytes_per_sec": 0, 00:15:16.694 "w_mbytes_per_sec": 0 00:15:16.694 }, 00:15:16.694 "claimed": false, 00:15:16.694 "zoned": false, 00:15:16.694 "supported_io_types": { 00:15:16.694 "read": true, 00:15:16.694 "write": true, 00:15:16.694 "unmap": true, 00:15:16.694 "write_zeroes": true, 00:15:16.694 "flush": true, 00:15:16.694 "reset": false, 00:15:16.694 "compare": false, 00:15:16.694 "compare_and_write": false, 00:15:16.694 "abort": false, 00:15:16.694 "nvme_admin": false, 00:15:16.694 "nvme_io": false 00:15:16.694 }, 
00:15:16.694 "driver_specific": { 00:15:16.694 "ftl": { 00:15:16.694 "base_bdev": "232890e7-a508-44ef-a4e1-af348fa01467", 00:15:16.694 "cache": "nvc0n1p0" 00:15:16.694 } 00:15:16.694 } 00:15:16.694 } 00:15:16.694 ] 00:15:16.694 07:30:25 -- common/autotest_common.sh@905 -- # return 0 00:15:16.694 07:30:25 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:16.694 07:30:25 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:16.953 07:30:26 -- ftl/fio.sh@70 -- # echo ']}' 00:15:16.953 07:30:26 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:16.953 [2024-11-19 07:30:26.182669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.953 [2024-11-19 07:30:26.182717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:16.953 [2024-11-19 07:30:26.182729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:16.953 [2024-11-19 07:30:26.182738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.953 [2024-11-19 07:30:26.182767] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:16.953 [2024-11-19 07:30:26.185285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.953 [2024-11-19 07:30:26.185326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:16.953 [2024-11-19 07:30:26.185340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:15:16.953 [2024-11-19 07:30:26.185349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.953 [2024-11-19 07:30:26.185767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.953 [2024-11-19 07:30:26.185785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:16.953 [2024-11-19 07:30:26.185795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:15:16.953 [2024-11-19 07:30:26.185802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.953 [2024-11-19 07:30:26.189040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.953 [2024-11-19 07:30:26.189064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:16.953 [2024-11-19 07:30:26.189075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:15:16.953 [2024-11-19 07:30:26.189084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.953 [2024-11-19 07:30:26.195328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.953 [2024-11-19 07:30:26.195454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:16.953 [2024-11-19 07:30:26.195474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:15:16.953 [2024-11-19 07:30:26.195482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.219426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.213 [2024-11-19 07:30:26.219460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:17.213 [2024-11-19 07:30:26.219474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.846 ms 00:15:17.213 [2024-11-19 07:30:26.219481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.234277] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.213 [2024-11-19 07:30:26.234397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:17.213 [2024-11-19 07:30:26.234430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.756 ms 00:15:17.213 [2024-11-19 07:30:26.234438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.234609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.213 [2024-11-19 07:30:26.234619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:17.213 [2024-11-19 07:30:26.234631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:15:17.213 [2024-11-19 07:30:26.234638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.257618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.213 [2024-11-19 07:30:26.257725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:17.213 [2024-11-19 07:30:26.257742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.951 ms 00:15:17.213 [2024-11-19 07:30:26.257749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.280408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.213 [2024-11-19 07:30:26.280513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:17.213 [2024-11-19 07:30:26.280530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.622 ms 00:15:17.213 [2024-11-19 07:30:26.280537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.303009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.213 [2024-11-19 07:30:26.303040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:17.213 [2024-11-19 07:30:26.303052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.436 ms 00:15:17.213 [2024-11-19 07:30:26.303060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.324880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.213 [2024-11-19 07:30:26.324906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:17.213 [2024-11-19 07:30:26.324916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.729 ms 00:15:17.213 [2024-11-19 07:30:26.324922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.213 [2024-11-19 07:30:26.324957] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:17.213 [2024-11-19 07:30:26.324969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.324979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.324986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.324993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.324999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325007] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 
07:30:26.325193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:17.213 [2024-11-19 07:30:26.325281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
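The band-validity dump running through this block belongs to the teardown that fio.sh@68-73 kicked off: the script first snapshots the live bdev configuration into the {"subsystems": [...]} envelope that fio's spdk_bdev engine expects at FTL_JSON_CONF, then unloads ftl0 so the fio jobs can re-create it from that file. Reconstructed from the traced commands (the redirection into FTL_JSON_CONF is implied by the script, not visible in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

{
    echo '{"subsystems": ['               # fio.sh@68
    "$rpc" save_subsystem_config -n bdev  # fio.sh@69: dump bdev subsystem JSON
    echo ']}'                             # fio.sh@70
} > "$FTL_JSON_CONF"

"$rpc" bdev_ftl_unload -b ftl0            # fio.sh@73: persists L2P/superblock, dumps band state

All 100 bands report "0 / 261120 wr_cnt: 0 state: free" because nothing has been written yet; the unload path above persisted the metadata and set the clean state before producing this dump.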
00:15:17.214 [2024-11-19 07:30:26.325362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:17.214 [2024-11-19 07:30:26.325691] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:17.214 [2024-11-19 07:30:26.325698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 60d5f0b2-ec95-42c1-a09c-028b63288583 00:15:17.214 [2024-11-19 07:30:26.325705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:17.214 [2024-11-19 07:30:26.325712] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:17.214 [2024-11-19 07:30:26.325717] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:17.214 [2024-11-19 07:30:26.325725] ftl_debug.c: 
00:15:17.214 [2024-11-19 07:30:26.325725] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:15:17.214 [2024-11-19 07:30:26.325730] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:15:17.214 [2024-11-19 07:30:26.325738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:15:17.214 [2024-11-19 07:30:26.325744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:15:17.214 [2024-11-19 07:30:26.325752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:15:17.214 [2024-11-19 07:30:26.325756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
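One gloss on the statistics block above: FTL's WAF line is write amplification under its usual definition, total media writes divided by user writes. This shutdown happens on a device that saw no host I/O, so the ratio degenerates:

    WAF = total writes / user writes = 960 / 0  ->  printed as "inf"

The 960 media writes are FTL's own metadata traffic (superblock, band state, checkpoints), which is why a never-written device still shows a nonzero write count.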
00:15:17.214 [2024-11-19 07:30:26.325765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:17.214 [2024-11-19 07:30:26.325772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Dump statistics
00:15:17.214 [2024-11-19 07:30:26.325780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.810 ms
00:15:17.214 [2024-11-19 07:30:26.325785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.214 [2024-11-19 07:30:26.335594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:17.214 [2024-11-19 07:30:26.335618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Deinitialize L2P
00:15:17.214 [2024-11-19 07:30:26.335628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 9.777 ms
00:15:17.214 [2024-11-19 07:30:26.335634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.214 [2024-11-19 07:30:26.335781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:17.214 [2024-11-19 07:30:26.335788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Deinitialize P2L checkpointing
00:15:17.214 [2024-11-19 07:30:26.335795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.127 ms
00:15:17.214 [2024-11-19 07:30:26.335801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.214 [2024-11-19 07:30:26.370554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.214 [2024-11-19 07:30:26.370667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize reloc
00:15:17.214 [2024-11-19 07:30:26.370683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.214 [2024-11-19 07:30:26.370689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.214 [2024-11-19 07:30:26.370754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.214 [2024-11-19 07:30:26.370761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize bands metadata
00:15:17.214 [2024-11-19 07:30:26.370768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.214 [2024-11-19 07:30:26.370774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.214 [2024-11-19 07:30:26.370847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.214 [2024-11-19 07:30:26.370855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize trim map
00:15:17.214 [2024-11-19 07:30:26.370863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.214 [2024-11-19 07:30:26.370868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.214 [2024-11-19 07:30:26.370893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.214 [2024-11-19 07:30:26.370901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize valid map
00:15:17.214 [2024-11-19 07:30:26.370908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.214 [2024-11-19 07:30:26.370913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.214 [2024-11-19 07:30:26.435980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.214 [2024-11-19 07:30:26.436016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize NV cache
00:15:17.215 [2024-11-19 07:30:26.436028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.436035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.458435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.215 [2024-11-19 07:30:26.458465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize metadata
00:15:17.215 [2024-11-19 07:30:26.458475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.458481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.458539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.215 [2024-11-19 07:30:26.458547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize core IO channel
00:15:17.215 [2024-11-19 07:30:26.458554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.458559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.458610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.215 [2024-11-19 07:30:26.458617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize bands
00:15:17.215 [2024-11-19 07:30:26.458626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.458631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.458709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.215 [2024-11-19 07:30:26.458717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize memory pools
00:15:17.215 [2024-11-19 07:30:26.458724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.458729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.458771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.215 [2024-11-19 07:30:26.458777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Initialize superblock
00:15:17.215 [2024-11-19 07:30:26.458785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.458792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.458823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.215 [2024-11-19 07:30:26.458830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Open cache bdev
00:15:17.215 [2024-11-19 07:30:26.458837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.458842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.458884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:17.215 [2024-11-19 07:30:26.458891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]  name:     Open base bdev
00:15:17.215 [2024-11-19 07:30:26.458900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]  duration: 0.000 ms
00:15:17.215 [2024-11-19 07:30:26.458906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]  status:   0
00:15:17.215 [2024-11-19 07:30:26.459026] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.334 ms, result 0
00:15:17.215 true
00:15:17.473 07:30:26 -- ftl/fio.sh@75 -- # killprocess 70783
00:15:17.473 07:30:26 -- common/autotest_common.sh@936 -- # '[' -z 70783 ']'
00:15:17.473 07:30:26 -- common/autotest_common.sh@940 -- # kill -0 70783
00:15:17.473 07:30:26 -- common/autotest_common.sh@941 -- # uname
00:15:17.473 07:30:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:17.473 07:30:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70783
00:15:17.473 killing process with pid 70783
00:15:17.473 07:30:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:17.473 07:30:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:17.473 07:30:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70783'
00:15:17.473 07:30:26 -- common/autotest_common.sh@955 -- # kill 70783
00:15:17.473 07:30:26 -- common/autotest_common.sh@960 -- # wait 70783
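The xtrace above walks autotest_common.sh's killprocess helper step by step. A minimal reconstruction of that flow, assembled from the trace rather than copied from the source (in particular, the sudo branch is not exercised in this run, so its handling below is an assumption):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1              # the '[' -z 70783 ']' guard
        kill -0 "$pid" 2>/dev/null || return 0 # nothing to do if it already exited
        local process_name=kill
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name == sudo ]]; then
            # pid belongs to a sudo wrapper; assumed handling, not hit here
            sudo kill "$pid"
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true        # reap so sockets/shm are released
    }

The final wait matters: the next test stage reuses the RPC socket and shared memory, so the helper blocks until the reactor process is fully gone.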
00:15:27.441 07:30:35 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:15:27.441 07:30:35 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:27.441 07:30:35 -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:15:27.441 07:30:35 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:27.441 07:30:35 -- common/autotest_common.sh@10 -- # set +x
00:15:27.441 07:30:35 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:27.441 07:30:35 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:27.441 07:30:35 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:15:27.441 07:30:35 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:27.441 07:30:35 -- common/autotest_common.sh@1328 -- # local sanitizers
00:15:27.441 07:30:35 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:27.441 07:30:35 -- common/autotest_common.sh@1330 -- # shift
00:15:27.441 07:30:35 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:15:27.441 07:30:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:15:27.441 07:30:35 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:27.441 07:30:35 -- common/autotest_common.sh@1334 -- # grep libasan
00:15:27.441 07:30:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:15:27.441 07:30:35 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:27.441 07:30:35 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:27.441 07:30:35 -- common/autotest_common.sh@1336 -- # break
00:15:27.441 07:30:35 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:27.441 07:30:35 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:27.441 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:15:27.441 fio-3.35
00:15:27.441 Starting 1 thread
00:15:31.627
00:15:31.627 test: (groupid=0, jobs=1): err= 0: pid=71047: Tue Nov 19 07:30:40 2024
00:15:31.627   read: IOPS=1047, BW=69.6MiB/s (72.9MB/s)(255MiB/3659msec)
00:15:31.627     slat (nsec): min=2946, max=18515, avg=4240.92, stdev=1718.53
00:15:31.627     clat (usec): min=238, max=1761, avg=432.31, stdev=163.77
00:15:31.627      lat (usec): min=242, max=1765, avg=436.55, stdev=164.23
00:15:31.627     clat percentiles (usec):
00:15:31.627      |  1.00th=[  269],  5.00th=[  281], 10.00th=[  302], 20.00th=[  310],
00:15:31.627      | 30.00th=[  314], 40.00th=[  322], 50.00th=[  379], 60.00th=[  449],
00:15:31.627      | 70.00th=[  498], 80.00th=[  510], 90.00th=[  635], 95.00th=[  824],
00:15:31.627      | 99.00th=[  947], 99.50th=[ 1045], 99.90th=[ 1254], 99.95th=[ 1336],
00:15:31.627      | 99.99th=[ 1762]
00:15:31.627   write: IOPS=1054, BW=70.0MiB/s (73.4MB/s)(256MiB/3656msec); 0 zone resets
00:15:31.627     slat (nsec): min=13528, max=47709, avg=17962.50, stdev=2968.58
00:15:31.627     clat (usec): min=258, max=1928, avg=484.32, stdev=195.71
00:15:31.627      lat (usec): min=276, max=1947, avg=502.28, stdev=196.44
00:15:31.627     clat percentiles (usec):
00:15:31.627      |  1.00th=[  297],  5.00th=[  314], 10.00th=[  330], 20.00th=[  334],
00:15:31.627      | 30.00th=[  338], 40.00th=[  355], 50.00th=[  433], 60.00th=[  515],
00:15:31.627      | 70.00th=[  537], 80.00th=[  586], 90.00th=[  791], 95.00th=[  906],
00:15:31.627      | 99.00th=[ 1057], 99.50th=[ 1418], 99.90th=[ 1713], 99.95th=[ 1844],
00:15:31.627      | 99.99th=[ 1926]
00:15:31.627    bw (  KiB/s): min=49096, max=97104, per=98.50%, avg=70642.29, stdev=19036.74, samples=7
00:15:31.627    iops        : min=  722, max= 1428, avg=1038.86, stdev=279.95, samples=7
00:15:31.627   lat (usec)   : 250=0.07%, 500=65.64%, 750=24.81%, 1000=8.51%
00:15:31.627   lat (msec)   : 2=0.98%
00:15:31.627   cpu          : usr=99.43%, sys=0.05%, ctx=6, majf=0, minf=1318
00:15:31.627   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:31.627      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:31.627      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:31.627      issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:31.627      latency   : target=0, window=0, percentile=100.00%, depth=1
00:15:31.627
00:15:31.627 Run status group 0 (all jobs):
00:15:31.627    READ: bw=69.6MiB/s (72.9MB/s), 69.6MiB/s-69.6MiB/s (72.9MB/s-72.9MB/s), io=255MiB (267MB), run=3659-3659msec
00:15:31.627   WRITE: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=256MiB (269MB), run=3656-3656msec
00:15:32.562 -----------------------------------------------------
00:15:32.562 Suppressions used:
00:15:32.562   count      bytes template
00:15:32.562       1          5 /usr/src/fio/parse.c
00:15:32.562       1          8 libtcmalloc_minimal.so
00:15:32.562       1        904 libcrypto.so
00:15:32.562 -----------------------------------------------------
00:15:32.562
00:15:32.562 07:30:41 -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:15:32.562 07:30:41 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:32.562 07:30:41 -- common/autotest_common.sh@10 -- # set +x
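The fio_bdev/fio_plugin trace at the top of this run shows how fio is pointed at SPDK bdevs: the spdk_bdev plugin is LD_PRELOADed, and because the plugin links against ASan, the sanitizer runtime must be preloaded ahead of it. A condensed sketch of the mechanism the trace walks (paths as in this run; the real helper lives in autotest_common.sh):

    fio_plugin() {
        local plugin=$1; shift
        local fio_dir=/usr/src/fio
        local sanitizers=(libasan libclang_rt.asan) sanitizer asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # find the sanitizer runtime the plugin was linked against
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        # ASan insists on being the first loaded object, hence the ordering
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }

    fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev randw-verify.fio

The same probe repeats before each fio job below, always resolving to /usr/lib64/libasan.so.8 on this host.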
00:15:32.562 07:30:41 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:32.562 07:30:41 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:15:32.562 07:30:41 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:32.562 07:30:41 -- common/autotest_common.sh@10 -- # set +x
00:15:32.562 07:30:41 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:32.562 07:30:41 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:32.563 [fio_plugin sanitizer probe identical to the randw-verify run above; asan_lib=/usr/lib64/libasan.so.8]
00:15:32.563 07:30:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:32.563 07:30:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:32.563 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:32.563 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:32.563 fio-3.35
00:15:32.563 Starting 2 threads
00:15:59.096
00:15:59.096 first_half: (groupid=0, jobs=1): err= 0: pid=71139: Tue Nov 19 07:31:03 2024
00:15:59.096   read: IOPS=3085, BW=12.1MiB/s (12.6MB/s)(255MiB/21142msec)
00:15:59.096     slat (nsec): min=3006, max=17248, avg=3622.33, stdev=547.99
00:15:59.096     clat (usec): min=614, max=247309, avg=31285.76, stdev=16775.01
00:15:59.096      lat (usec): min=618, max=247312, avg=31289.38, stdev=16775.06
00:15:59.096     clat percentiles (msec):
00:15:59.096      |  1.00th=[    7],  5.00th=[   22], 10.00th=[   28], 20.00th=[   28],
00:15:59.096      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:15:59.096      | 70.00th=[   29], 80.00th=[   33], 90.00th=[   36], 95.00th=[   41],
00:15:59.096      | 99.00th=[  131], 99.50th=[  144], 99.90th=[  184], 99.95th=[  209],
00:15:59.096      | 99.99th=[  241]
00:15:59.096   write: IOPS=3639, BW=14.2MiB/s (14.9MB/s)(256MiB/18009msec); 0 zone resets
00:15:59.096     slat (usec): min=3, max=288, avg= 5.20, stdev= 2.62
00:15:59.096     clat (usec): min=334, max=82011, avg=10131.54, stdev=16665.70
00:15:59.096      lat (usec): min=342, max=82016, avg=10136.74, stdev=16665.72
00:15:59.096     clat percentiles (usec):
00:15:59.096      |  1.00th=[  611],  5.00th=[  725], 10.00th=[  857], 20.00th=[ 1156],
00:15:59.096      | 30.00th=[ 2540], 40.00th=[ 3851], 50.00th=[ 4686], 60.00th=[ 5145],
00:15:59.096      | 70.00th=[ 5866], 80.00th=[10159], 90.00th=[29230], 95.00th=[58983],
00:15:59.096      | 99.00th=[66323], 99.50th=[71828], 99.90th=[77071], 99.95th=[80217],
00:15:59.096      | 99.99th=[81265]
00:15:59.096    bw (  KiB/s): min=  872, max=41760, per=78.29%, avg=22791.48, stdev=13583.77, samples=23
00:15:59.096    iops        : min=  218, max=10440, avg=5697.87, stdev=3395.94, samples=23
00:15:59.096   lat (usec)   : 500=0.04%, 750=3.05%, 1000=4.71%
00:15:59.096   lat (msec)   : 2=5.49%, 4=7.52%, 10=21.16%, 20=4.79%, 50=47.05%
00:15:59.096   lat (msec)   : 100=5.36%, 250=0.84%
00:15:59.096   cpu          : usr=99.48%, sys=0.15%, ctx=28, majf=0, minf=5563
00:15:59.096   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:59.096      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:59.096      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:59.096      issued rwts: total=65239,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:59.096      latency   : target=0, window=0, percentile=100.00%, depth=128
00:15:59.096 second_half: (groupid=0, jobs=1): err= 0: pid=71140: Tue Nov 19 07:31:03 2024
00:15:59.096   read: IOPS=3101, BW=12.1MiB/s (12.7MB/s)(254MiB/21003msec)
00:15:59.096     slat (nsec): min=2994, max=16789, avg=3654.22, stdev=584.89
00:15:59.096     clat (usec): min=577, max=250831, avg=31961.38, stdev=15494.08
00:15:59.096      lat (usec): min=581, max=250835, avg=31965.03, stdev=15494.09
00:15:59.096     clat percentiles (msec):
00:15:59.096      |  1.00th=[    4],  5.00th=[   28], 10.00th=[   28], 20.00th=[   29],
00:15:59.096      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:15:59.096      | 70.00th=[   30], 80.00th=[   33], 90.00th=[   36], 95.00th=[   44],
00:15:59.096      | 99.00th=[  124], 99.50th=[  136], 99.90th=[  157], 99.95th=[  163],
00:15:59.096      | 99.99th=[  247]
00:15:59.096   write: IOPS=4407, BW=17.2MiB/s (18.1MB/s)(256MiB/14869msec); 0 zone resets
00:15:59.096     slat (usec): min=3, max=221, avg= 5.24, stdev= 2.20
00:15:59.096     clat (usec): min=361, max=82175, avg=9235.92, stdev=16346.07
00:15:59.096      lat (usec): min=368, max=82190, avg=9241.15, stdev=16346.07
00:15:59.096     clat percentiles (usec):
00:15:59.096      |  1.00th=[  635],  5.00th=[  742], 10.00th=[  865], 20.00th=[ 1037],
00:15:59.096      | 30.00th=[ 1303], 40.00th=[ 2507], 50.00th=[ 3654], 60.00th=[ 4883],
00:15:59.096      | 70.00th=[ 6259], 80.00th=[ 9765], 90.00th=[13960], 95.00th=[58459],
00:15:59.096      | 99.00th=[66323], 99.50th=[69731], 99.90th=[78119], 99.95th=[81265],
00:15:59.096      | 99.99th=[81265]
00:15:59.096    bw (  KiB/s): min= 1040, max=47912, per=100.00%, avg=32768.00, stdev=12849.76, samples=16
00:15:59.096    iops        : min=  260, max=11978, avg=8192.00, stdev=3212.44, samples=16
00:15:59.096   lat (usec)   : 500=0.02%, 750=2.67%, 1000=6.39%
00:15:59.096   lat (msec)   : 2=8.78%, 4=8.89%, 10=14.68%, 20=5.42%, 50=46.68%
00:15:59.096   lat (msec)   : 100=5.64%, 250=0.82%, 500=0.01%
00:15:59.096   cpu          : usr=99.52%, sys=0.10%, ctx=28, majf=0, minf=5548
00:15:59.096   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:59.096      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:59.096      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:59.096      issued rwts: total=65144,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:59.096      latency   : target=0, window=0, percentile=100.00%, depth=128
00:15:59.096
00:15:59.096 Run status group 0 (all jobs):
00:15:59.096    READ: bw=24.1MiB/s (25.3MB/s), 12.1MiB/s-12.1MiB/s (12.6MB/s-12.7MB/s), io=509MiB (534MB), run=21003-21142msec
00:15:59.096   WRITE: bw=28.4MiB/s (29.8MB/s), 14.2MiB/s-17.2MiB/s (14.9MB/s-18.1MB/s), io=512MiB (537MB), run=14869-18009msec
00:15:59.096 -----------------------------------------------------
00:15:59.096 Suppressions used:
00:15:59.096   count      bytes template
00:15:59.096       2         10 /usr/src/fio/parse.c
00:15:59.096       2        192 /usr/src/fio/iolog.c
00:15:59.096       1          8 libtcmalloc_minimal.so
00:15:59.096       1        904 libcrypto.so
00:15:59.096 -----------------------------------------------------
00:15:59.096
00:15:59.096 07:31:06 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:15:59.096 07:31:06 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:59.096 07:31:06 -- common/autotest_common.sh@10 -- # set +x
00:15:59.096 07:31:06 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:59.096 07:31:06 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:15:59.096 07:31:06 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:59.096 07:31:06 -- common/autotest_common.sh@10 -- # set +x
00:15:59.096 07:31:06 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:59.096 07:31:06 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:59.097 [fio_plugin sanitizer probe identical to the runs above; asan_lib=/usr/lib64/libasan.so.8]
00:15:59.097 07:31:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:59.097 07:31:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:59.097 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:59.097 fio-3.35
00:15:59.097 Starting 1 thread
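The job files themselves are never echoed into the log; only their headers appear. For orientation, a job of roughly the shape implied by the header above (4 KiB random writes, queue depth 128, spdk_bdev engine) could be written like this. Everything below is an illustrative sketch, not the shipped test/ftl/config/fio file: the spdk_json_conf path, the bdev name, and the verify settings are all assumptions.

    cat > randw-verify-depth128.fio <<'EOF'
    [global]
    ioengine=spdk_bdev          ; served by the LD_PRELOADed fio plugin
    spdk_json_conf=ftl.json     ; bdev config exported by the test (assumed)
    thread=1
    rw=randwrite
    bs=4k
    iodepth=128
    verify=crc32c               ; read back and checksum what was written

    [test]
    filename=ftl0
    size=256M
    EOF

The verify phase is why each run below reports both a read and a write group for the same job.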
00:16:11.330
00:16:11.330 test: (groupid=0, jobs=1): err= 0: pid=71434: Tue Nov 19 07:31:19 2024
00:16:11.330   read: IOPS=8606, BW=33.6MiB/s (35.3MB/s)(255MiB/7576msec)
00:16:11.330     slat (nsec): min=2978, max=16316, avg=3503.45, stdev=554.68
00:16:11.330     clat (usec): min=489, max=28916, avg=14865.62, stdev=1621.19
00:16:11.330      lat (usec): min=493, max=28920, avg=14869.13, stdev=1621.20
00:16:11.330     clat percentiles (usec):
00:16:11.330      |  1.00th=[13698],  5.00th=[13829], 10.00th=[13960], 20.00th=[14091],
00:16:11.330      | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615],
00:16:11.330      | 70.00th=[14746], 80.00th=[14877], 90.00th=[15401], 95.00th=[17695],
00:16:11.330      | 99.00th=[22938], 99.50th=[23725], 99.90th=[24511], 99.95th=[25297],
00:16:11.330      | 99.99th=[28181]
00:16:11.330   write: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(256MiB/4913msec); 0 zone resets
00:16:11.330     slat (usec): min=4, max=397, avg= 5.75, stdev= 2.58
00:16:11.330     clat (usec): min=434, max=43939, avg=9555.97, stdev=9763.49
00:16:11.330      lat (usec): min=440, max=43945, avg=9561.71, stdev=9763.50
00:16:11.330     clat percentiles (usec):
00:16:11.330      |  1.00th=[  644],  5.00th=[  783], 10.00th=[  865], 20.00th=[  979],
00:16:11.330      | 30.00th=[ 1106], 40.00th=[ 1434], 50.00th=[ 7963], 60.00th=[10159],
00:16:11.330      | 70.00th=[12125], 80.00th=[14877], 90.00th=[28181], 95.00th=[29492],
00:16:11.330      | 99.00th=[32375], 99.50th=[35390], 99.90th=[38011], 99.95th=[38536],
00:16:11.330      | 99.99th=[42730]
00:16:11.330    bw (  KiB/s): min=39736, max=61432, per=98.24%, avg=52420.60, stdev=8020.87, samples=10
00:16:11.330    iops        : min= 9934, max=15358, avg=13105.10, stdev=2005.30, samples=10
00:16:11.330   lat (usec)   : 500=0.01%, 750=1.89%, 1000=9.05%
00:16:11.330   lat (msec)   : 2=9.71%, 4=0.52%, 10=8.59%, 20=60.73%, 50=9.50%
00:16:11.330   cpu          : usr=99.38%, sys=0.17%, ctx=22, majf=0, minf=5567
00:16:11.330   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:16:11.330      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:11.330      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:11.330      issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:11.330      latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:11.330
00:16:11.330 Run status group 0 (all jobs):
00:16:11.330    READ: bw=33.6MiB/s (35.3MB/s), 33.6MiB/s-33.6MiB/s (35.3MB/s-35.3MB/s), io=255MiB (267MB), run=7576-7576msec
00:16:11.330   WRITE: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=256MiB (268MB), run=4913-4913msec
00:16:12.270 -----------------------------------------------------
00:16:12.270 Suppressions used:
00:16:12.270   count      bytes template
00:16:12.270       1          5 /usr/src/fio/parse.c
00:16:12.270       2        192 /usr/src/fio/iolog.c
00:16:12.270       1          8 libtcmalloc_minimal.so
00:16:12.270       1        904 libcrypto.so
00:16:12.270 -----------------------------------------------------
00:16:12.270
00:16:12.270 07:31:21 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:16:12.270 07:31:21 -- common/autotest_common.sh@728 -- # xtrace_disable
00:16:12.270 07:31:21 -- common/autotest_common.sh@10 -- # set +x
00:16:12.270 07:31:21 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:12.270 Remove shared memory files
00:16:12.270 07:31:21 -- ftl/fio.sh@85 -- # remove_shm
00:16:12.270 07:31:21 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:16:12.270 07:31:21 -- ftl/common.sh@205 -- # rm -f rm -f
00:16:12.270 07:31:21 -- ftl/common.sh@206 -- # rm -f rm -f
00:16:12.270 07:31:21 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56187 /dev/shm/spdk_tgt_trace.pid69695
00:16:12.270 07:31:21 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:16:12.270 07:31:21 -- ftl/common.sh@209 -- # rm -f rm -f
00:16:12.270 ************************************
00:16:12.270 END TEST ftl_fio_basic
00:16:12.270 ************************************
00:16:12.270
00:16:12.270 real    1m2.908s
00:16:12.270 user    2m13.719s
00:16:12.270 sys     0m2.620s
00:16:12.270 07:31:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:12.270 07:31:21 -- common/autotest_common.sh@10 -- # set +x
00:16:12.529 07:31:21 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0
00:16:12.529 07:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:12.529 07:31:21 -- common/autotest_common.sh@10 -- # set +x
00:16:12.529 ************************************
00:16:12.529 START TEST ftl_bdevperf
00:16:12.529 ************************************
00:16:12.529 07:31:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0
00:16:12.529 * Looking for test storage...
00:16:12.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:12.529 07:31:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:12.529 07:31:21 -- common/autotest_common.sh@1690 -- # lcov --version
00:16:12.529 07:31:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:12.529 07:31:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:12.529 07:31:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:12.529 07:31:21 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:12.529 07:31:21 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:12.529 07:31:21 -- scripts/common.sh@335 -- # IFS=.-:
00:16:12.529 07:31:21 -- scripts/common.sh@335 -- # read -ra ver1
00:16:12.529 07:31:21 -- scripts/common.sh@336 -- # IFS=.-:
00:16:12.529 07:31:21 -- scripts/common.sh@336 -- # read -ra ver2
00:16:12.529 07:31:21 -- scripts/common.sh@337 -- # local 'op=<'
00:16:12.529 07:31:21 -- scripts/common.sh@339 -- # ver1_l=2
00:16:12.529 07:31:21 -- scripts/common.sh@340 -- # ver2_l=1
00:16:12.529 07:31:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:12.529 07:31:21 -- scripts/common.sh@343 -- # case "$op" in
00:16:12.529 07:31:21 -- scripts/common.sh@344 -- # : 1
00:16:12.529 07:31:21 -- scripts/common.sh@363 -- # (( v = 0 ))
00:16:12.529 07:31:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:12.529 07:31:21 -- scripts/common.sh@364 -- # decimal 1
00:16:12.529 07:31:21 -- scripts/common.sh@352 -- # local d=1
00:16:12.529 07:31:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:12.529 07:31:21 -- scripts/common.sh@354 -- # echo 1
00:16:12.529 07:31:21 -- scripts/common.sh@364 -- # ver1[v]=1
00:16:12.529 07:31:21 -- scripts/common.sh@365 -- # decimal 2
00:16:12.529 07:31:21 -- scripts/common.sh@352 -- # local d=2
00:16:12.529 07:31:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:12.529 07:31:21 -- scripts/common.sh@354 -- # echo 2
00:16:12.529 07:31:21 -- scripts/common.sh@365 -- # ver2[v]=2
00:16:12.529 07:31:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:12.529 07:31:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:12.529 07:31:21 -- scripts/common.sh@367 -- # return 0
00:16:12.529 07:31:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:12.529 07:31:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:12.529 --rc genhtml_branch_coverage=1
00:16:12.529 --rc genhtml_function_coverage=1
00:16:12.529 --rc genhtml_legend=1
00:16:12.529 --rc geninfo_all_blocks=1
00:16:12.529 --rc geninfo_unexecuted_blocks=1
00:16:12.529
00:16:12.529 '
00:16:12.529 07:31:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:12.529 --rc genhtml_branch_coverage=1
00:16:12.529 --rc genhtml_function_coverage=1
00:16:12.529 --rc genhtml_legend=1
00:16:12.529 --rc geninfo_all_blocks=1
00:16:12.529 --rc geninfo_unexecuted_blocks=1
00:16:12.529
00:16:12.529 '
00:16:12.529 07:31:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:16:12.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:12.530 --rc genhtml_branch_coverage=1
00:16:12.530 --rc genhtml_function_coverage=1
00:16:12.530 --rc genhtml_legend=1
00:16:12.530 --rc geninfo_all_blocks=1
00:16:12.530 --rc geninfo_unexecuted_blocks=1
00:16:12.530
00:16:12.530 '
00:16:12.530 07:31:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:16:12.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:12.530 --rc genhtml_branch_coverage=1
00:16:12.530 --rc genhtml_function_coverage=1
00:16:12.530 --rc genhtml_legend=1
00:16:12.530 --rc geninfo_all_blocks=1
00:16:12.530 --rc geninfo_unexecuted_blocks=1
00:16:12.530
00:16:12.530 '
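The cmp_versions walk traced above is a plain component-wise version compare: split both strings on the separators ".-:", then compare numerically field by field. A minimal re-creation of that logic, reconstructed from the xtrace and simplified (not copied from scripts/common.sh):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$3"   # "2"    -> (2)
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # walk the longer list; a missing component counts as 0
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov, use legacy --rc options"

In this run the first component already decides it (1 < 2), which is why the trace returns 0 after a single loop iteration and the legacy lcov_branch_coverage options get exported.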
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:16:12.530 07:31:21 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:16:12.530 07:31:21 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:16:12.530 07:31:21 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:16:12.530 07:31:21 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:16:12.530 07:31:21 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:16:12.530 07:31:21 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:12.530 07:31:21 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:16:12.530 07:31:21 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:16:12.530 07:31:21 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:12.530 07:31:21 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:12.530 07:31:21 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:16:12.530 07:31:21 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:16:12.530 07:31:21 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:12.530 07:31:21 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:12.530 07:31:21 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:16:12.530 07:31:21 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:16:12.530 07:31:21 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:12.530 07:31:21 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:12.530 07:31:21 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:16:12.530 07:31:21 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:16:12.530 07:31:21 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:12.530 07:31:21 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:12.530 07:31:21 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:12.530 07:31:21 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:12.530 07:31:21 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:16:12.530 07:31:21 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:16:12.530 07:31:21 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:12.530 07:31:21 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@13 -- # use_append=
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@15 -- # timeout=240
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:16:12.530 07:31:21 -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:12.530 07:31:21 -- common/autotest_common.sh@10 -- # set +x
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71668
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@22 -- # waitforlisten 71668
00:16:12.530 07:31:21 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:16:12.530 07:31:21 -- common/autotest_common.sh@829 -- # '[' -z 71668 ']'
00:16:12.530 07:31:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:12.530 07:31:21 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:12.530 07:31:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:12.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:12.530 07:31:21 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:12.530 07:31:21 -- common/autotest_common.sh@10 -- # set +x
00:16:12.788 [2024-11-19 07:31:21.760436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:12.788 [2024-11-19 07:31:21.760696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71668 ]
00:16:13.046 [2024-11-19 07:31:21.910020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:13.046 [2024-11-19 07:31:22.088718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:13.613 07:31:22 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:13.613 07:31:22 -- common/autotest_common.sh@862 -- # return 0
00:16:13.613 07:31:22 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:16:13.613 07:31:22 -- ftl/common.sh@54 -- # local name=nvme0
00:16:13.613 07:31:22 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:16:13.613 07:31:22 -- ftl/common.sh@56 -- # local size=103424
00:16:13.613 07:31:22 -- ftl/common.sh@59 -- # local base_bdev
00:16:13.613 07:31:22 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:16:13.613 07:31:22 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:16:13.613 07:31:22 -- ftl/common.sh@62 -- # local base_size
00:16:13.613 07:31:22 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:16:13.613 07:31:22 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1
00:16:13.613 07:31:22 -- common/autotest_common.sh@1368 -- # local bdev_info
00:16:13.613 07:31:22 -- common/autotest_common.sh@1369 -- # local bs
00:16:13.613 07:31:22 -- common/autotest_common.sh@1370 -- # local nb
00:16:13.613 07:31:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:16:13.872 "vendor_id": "0x1b36", 00:16:13.872 "model_number": "QEMU NVMe Ctrl", 00:16:13.872 "serial_number": "12341", 00:16:13.872 "firmware_revision": "8.0.0", 00:16:13.872 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:13.872 "oacs": { 00:16:13.872 "security": 0, 00:16:13.872 "format": 1, 00:16:13.872 "firmware": 0, 00:16:13.872 "ns_manage": 1 00:16:13.872 }, 00:16:13.872 "multi_ctrlr": false, 00:16:13.872 "ana_reporting": false 00:16:13.872 }, 00:16:13.872 "vs": { 00:16:13.872 "nvme_version": "1.4" 00:16:13.872 }, 00:16:13.872 "ns_data": { 00:16:13.872 "id": 1, 00:16:13.872 "can_share": false 00:16:13.872 } 00:16:13.872 } 00:16:13.872 ], 00:16:13.872 "mp_policy": "active_passive" 00:16:13.872 } 00:16:13.872 } 00:16:13.872 ]' 00:16:13.872 07:31:23 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:13.872 07:31:23 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:13.872 07:31:23 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:13.872 07:31:23 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:13.872 07:31:23 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:13.872 07:31:23 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:13.872 07:31:23 -- ftl/common.sh@63 -- # base_size=5120 00:16:13.872 07:31:23 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:13.872 07:31:23 -- ftl/common.sh@67 -- # clear_lvols 00:16:13.872 07:31:23 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:13.872 07:31:23 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:14.131 07:31:23 -- ftl/common.sh@28 -- # stores=ecc5d28e-ea0a-4164-863a-ba6e3ec6752f 00:16:14.131 07:31:23 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:14.131 07:31:23 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ecc5d28e-ea0a-4164-863a-ba6e3ec6752f 00:16:14.391 07:31:23 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:14.651 07:31:23 -- ftl/common.sh@68 -- # lvs=0c90456f-b674-4bf9-8bee-6ab97c1417fc 00:16:14.651 07:31:23 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c90456f-b674-4bf9-8bee-6ab97c1417fc 00:16:14.651 07:31:23 -- ftl/bdevperf.sh@23 -- # split_bdev=316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:14.651 07:31:23 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:14.651 07:31:23 -- ftl/common.sh@35 -- # local name=nvc0 00:16:14.651 07:31:23 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:14.651 07:31:23 -- ftl/common.sh@37 -- # local base_bdev=316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:14.651 07:31:23 -- ftl/common.sh@38 -- # local cache_size= 00:16:14.651 07:31:23 -- ftl/common.sh@41 -- # get_bdev_size 316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:14.651 07:31:23 -- common/autotest_common.sh@1367 -- # local bdev_name=316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:14.651 07:31:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:14.651 07:31:23 -- common/autotest_common.sh@1369 -- # local bs 00:16:14.651 07:31:23 -- common/autotest_common.sh@1370 -- # local nb 00:16:14.651 07:31:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:14.910 07:31:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:14.910 { 00:16:14.910 "name": "316aee7a-6648-4a08-b1c8-0ab335317c31", 00:16:14.910 "aliases": [ 
00:16:14.910 "lvs/nvme0n1p0" 00:16:14.910 ], 00:16:14.910 "product_name": "Logical Volume", 00:16:14.910 "block_size": 4096, 00:16:14.910 "num_blocks": 26476544, 00:16:14.910 "uuid": "316aee7a-6648-4a08-b1c8-0ab335317c31", 00:16:14.910 "assigned_rate_limits": { 00:16:14.910 "rw_ios_per_sec": 0, 00:16:14.910 "rw_mbytes_per_sec": 0, 00:16:14.910 "r_mbytes_per_sec": 0, 00:16:14.910 "w_mbytes_per_sec": 0 00:16:14.910 }, 00:16:14.910 "claimed": false, 00:16:14.910 "zoned": false, 00:16:14.910 "supported_io_types": { 00:16:14.910 "read": true, 00:16:14.910 "write": true, 00:16:14.910 "unmap": true, 00:16:14.910 "write_zeroes": true, 00:16:14.910 "flush": false, 00:16:14.910 "reset": true, 00:16:14.910 "compare": false, 00:16:14.910 "compare_and_write": false, 00:16:14.910 "abort": false, 00:16:14.910 "nvme_admin": false, 00:16:14.910 "nvme_io": false 00:16:14.910 }, 00:16:14.910 "driver_specific": { 00:16:14.910 "lvol": { 00:16:14.910 "lvol_store_uuid": "0c90456f-b674-4bf9-8bee-6ab97c1417fc", 00:16:14.910 "base_bdev": "nvme0n1", 00:16:14.910 "thin_provision": true, 00:16:14.910 "snapshot": false, 00:16:14.910 "clone": false, 00:16:14.910 "esnap_clone": false 00:16:14.910 } 00:16:14.910 } 00:16:14.910 } 00:16:14.910 ]' 00:16:14.910 07:31:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:14.910 07:31:24 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:14.910 07:31:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:14.910 07:31:24 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:14.910 07:31:24 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:14.910 07:31:24 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:14.910 07:31:24 -- ftl/common.sh@41 -- # local base_size=5171 00:16:14.910 07:31:24 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:14.910 07:31:24 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:15.169 07:31:24 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:15.169 07:31:24 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:15.169 07:31:24 -- ftl/common.sh@48 -- # get_bdev_size 316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:15.169 07:31:24 -- common/autotest_common.sh@1367 -- # local bdev_name=316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:15.169 07:31:24 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:15.169 07:31:24 -- common/autotest_common.sh@1369 -- # local bs 00:16:15.169 07:31:24 -- common/autotest_common.sh@1370 -- # local nb 00:16:15.169 07:31:24 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:15.428 07:31:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:15.428 { 00:16:15.428 "name": "316aee7a-6648-4a08-b1c8-0ab335317c31", 00:16:15.428 "aliases": [ 00:16:15.428 "lvs/nvme0n1p0" 00:16:15.428 ], 00:16:15.428 "product_name": "Logical Volume", 00:16:15.428 "block_size": 4096, 00:16:15.428 "num_blocks": 26476544, 00:16:15.428 "uuid": "316aee7a-6648-4a08-b1c8-0ab335317c31", 00:16:15.428 "assigned_rate_limits": { 00:16:15.428 "rw_ios_per_sec": 0, 00:16:15.428 "rw_mbytes_per_sec": 0, 00:16:15.428 "r_mbytes_per_sec": 0, 00:16:15.428 "w_mbytes_per_sec": 0 00:16:15.428 }, 00:16:15.428 "claimed": false, 00:16:15.428 "zoned": false, 00:16:15.428 "supported_io_types": { 00:16:15.428 "read": true, 00:16:15.428 "write": true, 00:16:15.428 "unmap": true, 00:16:15.428 "write_zeroes": true, 00:16:15.428 "flush": false, 00:16:15.428 "reset": true, 
00:16:15.428 "compare": false, 00:16:15.428 "compare_and_write": false, 00:16:15.428 "abort": false, 00:16:15.428 "nvme_admin": false, 00:16:15.428 "nvme_io": false 00:16:15.428 }, 00:16:15.428 "driver_specific": { 00:16:15.428 "lvol": { 00:16:15.428 "lvol_store_uuid": "0c90456f-b674-4bf9-8bee-6ab97c1417fc", 00:16:15.428 "base_bdev": "nvme0n1", 00:16:15.428 "thin_provision": true, 00:16:15.428 "snapshot": false, 00:16:15.428 "clone": false, 00:16:15.428 "esnap_clone": false 00:16:15.428 } 00:16:15.428 } 00:16:15.428 } 00:16:15.428 ]' 00:16:15.428 07:31:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:15.428 07:31:24 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:15.428 07:31:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:15.428 07:31:24 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:15.428 07:31:24 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:15.428 07:31:24 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:15.428 07:31:24 -- ftl/common.sh@48 -- # cache_size=5171 00:16:15.428 07:31:24 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:15.687 07:31:24 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:15.687 07:31:24 -- ftl/bdevperf.sh@26 -- # get_bdev_size 316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:15.687 07:31:24 -- common/autotest_common.sh@1367 -- # local bdev_name=316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:15.687 07:31:24 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:15.687 07:31:24 -- common/autotest_common.sh@1369 -- # local bs 00:16:15.687 07:31:24 -- common/autotest_common.sh@1370 -- # local nb 00:16:15.687 07:31:24 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 316aee7a-6648-4a08-b1c8-0ab335317c31 00:16:15.946 07:31:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:15.946 { 00:16:15.946 "name": "316aee7a-6648-4a08-b1c8-0ab335317c31", 00:16:15.946 "aliases": [ 00:16:15.946 "lvs/nvme0n1p0" 00:16:15.946 ], 00:16:15.946 "product_name": "Logical Volume", 00:16:15.946 "block_size": 4096, 00:16:15.946 "num_blocks": 26476544, 00:16:15.946 "uuid": "316aee7a-6648-4a08-b1c8-0ab335317c31", 00:16:15.946 "assigned_rate_limits": { 00:16:15.946 "rw_ios_per_sec": 0, 00:16:15.946 "rw_mbytes_per_sec": 0, 00:16:15.946 "r_mbytes_per_sec": 0, 00:16:15.946 "w_mbytes_per_sec": 0 00:16:15.946 }, 00:16:15.946 "claimed": false, 00:16:15.946 "zoned": false, 00:16:15.946 "supported_io_types": { 00:16:15.946 "read": true, 00:16:15.946 "write": true, 00:16:15.946 "unmap": true, 00:16:15.946 "write_zeroes": true, 00:16:15.946 "flush": false, 00:16:15.946 "reset": true, 00:16:15.946 "compare": false, 00:16:15.946 "compare_and_write": false, 00:16:15.946 "abort": false, 00:16:15.946 "nvme_admin": false, 00:16:15.946 "nvme_io": false 00:16:15.946 }, 00:16:15.946 "driver_specific": { 00:16:15.946 "lvol": { 00:16:15.946 "lvol_store_uuid": "0c90456f-b674-4bf9-8bee-6ab97c1417fc", 00:16:15.946 "base_bdev": "nvme0n1", 00:16:15.946 "thin_provision": true, 00:16:15.946 "snapshot": false, 00:16:15.946 "clone": false, 00:16:15.946 "esnap_clone": false 00:16:15.946 } 00:16:15.946 } 00:16:15.946 } 00:16:15.946 ]' 00:16:15.946 07:31:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:15.946 07:31:25 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:15.946 07:31:25 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:15.946 07:31:25 -- common/autotest_common.sh@1373 -- # 
nb=26476544 00:16:15.946 07:31:25 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:15.946 07:31:25 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:15.946 07:31:25 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:15.946 07:31:25 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 316aee7a-6648-4a08-b1c8-0ab335317c31 -c nvc0n1p0 --l2p_dram_limit 20 00:16:16.206 [2024-11-19 07:31:25.207977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.208021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:16.206 [2024-11-19 07:31:25.208034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:16.206 [2024-11-19 07:31:25.208040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.208082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.208089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.206 [2024-11-19 07:31:25.208097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:16.206 [2024-11-19 07:31:25.208102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.208116] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:16.206 [2024-11-19 07:31:25.208738] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:16.206 [2024-11-19 07:31:25.208757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.208763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.206 [2024-11-19 07:31:25.208771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:16:16.206 [2024-11-19 07:31:25.208777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.208828] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0df7cb8c-adb3-4018-82d4-46b725b7a019 00:16:16.206 [2024-11-19 07:31:25.209771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.209789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:16.206 [2024-11-19 07:31:25.209798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:16.206 [2024-11-19 07:31:25.209805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.214531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.214562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.206 [2024-11-19 07:31:25.214569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:16:16.206 [2024-11-19 07:31:25.214576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.214644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.214652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.206 [2024-11-19 07:31:25.214659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:16.206 [2024-11-19 07:31:25.214668] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.214707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.214715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:16.206 [2024-11-19 07:31:25.214723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:16.206 [2024-11-19 07:31:25.214730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.214745] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:16.206 [2024-11-19 07:31:25.217710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.217734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.206 [2024-11-19 07:31:25.217742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:16:16.206 [2024-11-19 07:31:25.217748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.217774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.217781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:16.206 [2024-11-19 07:31:25.217788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:16.206 [2024-11-19 07:31:25.217793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.206 [2024-11-19 07:31:25.217805] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:16.206 [2024-11-19 07:31:25.217894] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:16.206 [2024-11-19 07:31:25.217906] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:16.206 [2024-11-19 07:31:25.217914] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:16.206 [2024-11-19 07:31:25.217923] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:16.206 [2024-11-19 07:31:25.217929] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:16.206 [2024-11-19 07:31:25.217937] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:16.206 [2024-11-19 07:31:25.217942] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:16.206 [2024-11-19 07:31:25.217952] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:16.206 [2024-11-19 07:31:25.217957] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:16.206 [2024-11-19 07:31:25.217964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.206 [2024-11-19 07:31:25.217969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:16.207 [2024-11-19 07:31:25.217976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:16:16.207 [2024-11-19 07:31:25.217981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.207 [2024-11-19 07:31:25.218027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.207 [2024-11-19 07:31:25.218034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Verify layout 00:16:16.207 [2024-11-19 07:31:25.218040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:16.207 [2024-11-19 07:31:25.218046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.207 [2024-11-19 07:31:25.218099] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:16.207 [2024-11-19 07:31:25.218106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:16.207 [2024-11-19 07:31:25.218113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:16.207 [2024-11-19 07:31:25.218136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:16.207 [2024-11-19 07:31:25.218154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.207 [2024-11-19 07:31:25.218166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:16.207 [2024-11-19 07:31:25.218172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:16.207 [2024-11-19 07:31:25.218194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.207 [2024-11-19 07:31:25.218201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:16.207 [2024-11-19 07:31:25.218207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:16.207 [2024-11-19 07:31:25.218212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218220] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:16.207 [2024-11-19 07:31:25.218226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:16.207 [2024-11-19 07:31:25.218232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:16.207 [2024-11-19 07:31:25.218243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:16.207 [2024-11-19 07:31:25.218248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:16.207 [2024-11-19 07:31:25.218260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:16.207 [2024-11-19 07:31:25.218278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218289] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:16.207 [2024-11-19 07:31:25.218293] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:16.207 [2024-11-19 07:31:25.218312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:16.207 [2024-11-19 07:31:25.218328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.207 [2024-11-19 07:31:25.218339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:16.207 [2024-11-19 07:31:25.218346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:16.207 [2024-11-19 07:31:25.218350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.207 [2024-11-19 07:31:25.218356] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:16.207 [2024-11-19 07:31:25.218363] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:16.207 [2024-11-19 07:31:25.218370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.207 [2024-11-19 07:31:25.218387] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:16.207 [2024-11-19 07:31:25.218393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:16.207 [2024-11-19 07:31:25.218398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:16.207 [2024-11-19 07:31:25.218403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:16.207 [2024-11-19 07:31:25.218411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:16.207 [2024-11-19 07:31:25.218416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:16.207 [2024-11-19 07:31:25.218422] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:16.207 [2024-11-19 07:31:25.218429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.207 [2024-11-19 07:31:25.218439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:16.207 [2024-11-19 07:31:25.218444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:16.207 [2024-11-19 07:31:25.218451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:16.207 [2024-11-19 07:31:25.218456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:16.207 [2024-11-19 07:31:25.218462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:16.207 [2024-11-19 07:31:25.218468] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:16.207 [2024-11-19 07:31:25.218474] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:16.207 [2024-11-19 07:31:25.218479] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:16.207 [2024-11-19 07:31:25.218486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:16.207 [2024-11-19 07:31:25.218491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:16.207 [2024-11-19 07:31:25.218499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:16.207 [2024-11-19 07:31:25.218504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:16.207 [2024-11-19 07:31:25.218513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:16.207 [2024-11-19 07:31:25.218519] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:16.207 [2024-11-19 07:31:25.218527] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.207 [2024-11-19 07:31:25.218532] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:16.207 [2024-11-19 07:31:25.218539] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:16.207 [2024-11-19 07:31:25.218544] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:16.207 [2024-11-19 07:31:25.218551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:16.207 [2024-11-19 07:31:25.218556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.207 [2024-11-19 07:31:25.218563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:16.207 [2024-11-19 07:31:25.218569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:16:16.207 [2024-11-19 07:31:25.218575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.207 [2024-11-19 07:31:25.230482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.207 [2024-11-19 07:31:25.230598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.207 [2024-11-19 07:31:25.230610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.871 ms 00:16:16.207 [2024-11-19 07:31:25.230617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.207 [2024-11-19 07:31:25.230684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.207 [2024-11-19 07:31:25.230693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:16.207 [2024-11-19 07:31:25.230698] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:16.207 [2024-11-19 07:31:25.230705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.207 [2024-11-19 07:31:25.271368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.207 [2024-11-19 07:31:25.271404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.208 [2024-11-19 07:31:25.271414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.629 ms 00:16:16.208 [2024-11-19 07:31:25.271422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.208 [2024-11-19 07:31:25.271450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.208 [2024-11-19 07:31:25.271461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.208 [2024-11-19 07:31:25.271468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:16.208 [2024-11-19 07:31:25.271475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.208 [2024-11-19 07:31:25.271792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.208 [2024-11-19 07:31:25.271807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.208 [2024-11-19 07:31:25.271814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:16.208 [2024-11-19 07:31:25.271822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.208 [2024-11-19 07:31:25.271905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.208 [2024-11-19 07:31:25.271914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.208 [2024-11-19 07:31:25.271926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:16.208 [2024-11-19 07:31:25.271933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.208 [2024-11-19 07:31:25.283109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.208 [2024-11-19 07:31:25.283241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.208 [2024-11-19 07:31:25.283256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.164 ms 00:16:16.208 [2024-11-19 07:31:25.283263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.208 [2024-11-19 07:31:25.292290] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:16.208 [2024-11-19 07:31:25.296497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.208 [2024-11-19 07:31:25.296522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:16.208 [2024-11-19 07:31:25.296532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.173 ms 00:16:16.208 [2024-11-19 07:31:25.296538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.208 [2024-11-19 07:31:25.358852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.208 [2024-11-19 07:31:25.358972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:16.208 [2024-11-19 07:31:25.358988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.293 ms 00:16:16.208 [2024-11-19 07:31:25.358994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.208 [2024-11-19 07:31:25.359023] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: 
*NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:16.208 [2024-11-19 07:31:25.359033] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:19.500 [2024-11-19 07:31:28.078358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.078443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:19.500 [2024-11-19 07:31:28.078465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2719.307 ms 00:16:19.500 [2024-11-19 07:31:28.078475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.078699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.078712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:19.500 [2024-11-19 07:31:28.078725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:16:19.500 [2024-11-19 07:31:28.078733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.106330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.106570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:19.500 [2024-11-19 07:31:28.106602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.535 ms 00:16:19.500 [2024-11-19 07:31:28.106615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.132191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.132245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:19.500 [2024-11-19 07:31:28.132266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.510 ms 00:16:19.500 [2024-11-19 07:31:28.132274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.132629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.132650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:19.500 [2024-11-19 07:31:28.132662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:16:19.500 [2024-11-19 07:31:28.132671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.206658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.206715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:19.500 [2024-11-19 07:31:28.206733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.940 ms 00:16:19.500 [2024-11-19 07:31:28.206742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.235295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.235353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:19.500 [2024-11-19 07:31:28.235369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.487 ms 00:16:19.500 [2024-11-19 07:31:28.235378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.236874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 
07:31:28.236927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:19.500 [2024-11-19 07:31:28.236943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:16:19.500 [2024-11-19 07:31:28.236954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.263895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.263951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:19.500 [2024-11-19 07:31:28.263968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.873 ms 00:16:19.500 [2024-11-19 07:31:28.263975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.264033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.264043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:19.500 [2024-11-19 07:31:28.264057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:19.500 [2024-11-19 07:31:28.264065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.264162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.500 [2024-11-19 07:31:28.264173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:19.500 [2024-11-19 07:31:28.264211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:19.500 [2024-11-19 07:31:28.264219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.500 [2024-11-19 07:31:28.265421] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3056.871 ms, result 0 00:16:19.500 { 00:16:19.500 "name": "ftl0", 00:16:19.500 "uuid": "0df7cb8c-adb3-4018-82d4-46b725b7a019" 00:16:19.500 } 00:16:19.500 07:31:28 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:19.500 07:31:28 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:19.500 07:31:28 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:19.500 07:31:28 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:19.500 [2024-11-19 07:31:28.581564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:19.500 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:19.500 Zero copy mechanism will not be used. 00:16:19.500 Running I/O for 4 seconds... 
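While that first 4-second pass runs, a note on the 69632-byte transfer size flagged just above: 69632 B is 17 blocks of 4096 B, i.e. 68 KiB, so it exceeds bdevperf's 64 KiB (65536 B) zero-copy threshold by exactly one block, which is why the tool announces it will copy buffers instead. A quick sanity check (assuming nothing beyond a POSIX shell with awk):

    awk 'BEGIN { print 69632 / 4096, 69632 - 65536 }'
    # prints: 17 4096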
00:16:23.776 00:16:23.776 Latency(us) 00:16:23.776 [2024-11-19T07:31:33.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.776 [2024-11-19T07:31:33.026Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:23.776 ftl0 : 4.00 1750.05 116.21 0.00 0.00 597.76 151.24 3503.66 00:16:23.776 [2024-11-19T07:31:33.026Z] =================================================================================================================== 00:16:23.776 [2024-11-19T07:31:33.026Z] Total : 1750.05 116.21 0.00 0.00 597.76 151.24 3503.66 00:16:23.776 0 00:16:23.776 [2024-11-19 07:31:32.591551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:23.776 07:31:32 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:23.776 [2024-11-19 07:31:32.706775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:23.776 Running I/O for 4 seconds... 00:16:27.964 00:16:27.964 Latency(us) 00:16:27.964 [2024-11-19T07:31:37.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.964 [2024-11-19T07:31:37.214Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.964 ftl0 : 4.03 6887.43 26.90 0.00 0.00 18498.26 177.23 49000.76 00:16:27.964 [2024-11-19T07:31:37.214Z] =================================================================================================================== 00:16:27.964 [2024-11-19T07:31:37.214Z] Total : 6887.43 26.90 0.00 0.00 18498.26 0.00 49000.76 00:16:27.964 [2024-11-19 07:31:36.749497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:27.964 0 00:16:27.964 07:31:36 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:27.964 [2024-11-19 07:31:36.865223] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:27.964 Running I/O for 4 seconds... 
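While the verify pass runs, the two completed passes can be cross-checked: bdevperf's MiB/s column follows directly from IOPS × I/O size. Recomputing from the figures reported in the tables above (again assuming only a shell with awk; 1 MiB = 1048576 B):

    awk 'BEGIN { printf "%.2f %.2f\n", 1750.05 * 69632 / 1048576, 6887.43 * 4096 / 1048576 }'
    # prints: 116.21 26.90

which reproduces the 116.21 MiB/s reported for the qd=1, 68 KiB randwrite pass and the 26.90 MiB/s for the qd=128, 4 KiB randwrite pass.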
00:16:32.148 00:16:32.148 Latency(us) [2024-11-19T07:31:41.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max [2024-11-19T07:31:41.398Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.148 Verification LBA range: start 0x0 length 0x1400000 00:16:32.148 ftl0 : 4.00 9228.54 36.05 0.00 0.00 13841.73 160.69 30852.33 [2024-11-19T07:31:41.398Z] =================================================================================================================== [2024-11-19T07:31:41.398Z] Total : 9228.54 36.05 0.00 0.00 13841.73 0.00 30852.33 [2024-11-19 07:31:40.884588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:32.149 07:31:40 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-11-19 07:31:41.066480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.066522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:32.149 [2024-11-19 07:31:41.066537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:32.149 [2024-11-19 07:31:41.066544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.066566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:32.149 [2024-11-19 07:31:41.069107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.069154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:32.149 [2024-11-19 07:31:41.069165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:16:32.149 [2024-11-19 07:31:41.069176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.071457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.071576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:32.149 [2024-11-19 07:31:41.071592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:16:32.149 [2024-11-19 07:31:41.071602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.278499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.278541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:32.149 [2024-11-19 07:31:41.278555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 206.879 ms 00:16:32.149 [2024-11-19 07:31:41.278565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.284641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.284670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:32.149 [2024-11-19 07:31:41.284680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.048 ms 00:16:32.149 [2024-11-19 07:31:41.284689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.308497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.308531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:32.149 [2024-11-19 07:31:41.308542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.763 ms 00:16:32.149 [2024-11-19 07:31:41.308553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.323783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.323913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:32.149 [2024-11-19 07:31:41.323929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.199 ms 00:16:32.149 [2024-11-19 07:31:41.323939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.324116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.324129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:32.149 [2024-11-19 07:31:41.324138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:32.149 [2024-11-19 07:31:41.324146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.347441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.347475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:32.149 [2024-11-19 07:31:41.347486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.281 ms 00:16:32.149 [2024-11-19 07:31:41.347495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.370653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.370770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:32.149 [2024-11-19 07:31:41.370784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.128 ms 00:16:32.149 [2024-11-19 07:31:41.370795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.149 [2024-11-19 07:31:41.393468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.149 [2024-11-19 07:31:41.393581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.149 [2024-11-19 07:31:41.393596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.647 ms 00:16:32.149 [2024-11-19 07:31:41.393604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.409 [2024-11-19 07:31:41.416262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.409 [2024-11-19 07:31:41.416380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:32.409 [2024-11-19 07:31:41.416396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.602 ms 00:16:32.409 [2024-11-19 07:31:41.416405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.409 [2024-11-19 07:31:41.416430] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:32.409 [2024-11-19 07:31:41.416446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416472] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 
07:31:41.416680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:32.409 [2024-11-19 07:31:41.416881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:16:32.409 [2024-11-19 07:31:41.416888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.416997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:32.410 [2024-11-19 07:31:41.417321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:32.410 [2024-11-19 07:31:41.417329] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0df7cb8c-adb3-4018-82d4-46b725b7a019 00:16:32.410 [2024-11-19 07:31:41.417340] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:32.410 
[2024-11-19 07:31:41.417347] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:32.410 [2024-11-19 07:31:41.417356] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:32.410 [2024-11-19 07:31:41.417363] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:32.410 [2024-11-19 07:31:41.417371] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:32.410 [2024-11-19 07:31:41.417380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:32.410 [2024-11-19 07:31:41.417389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:32.410 [2024-11-19 07:31:41.417396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:32.410 [2024-11-19 07:31:41.417405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:32.410 [2024-11-19 07:31:41.417412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.410 [2024-11-19 07:31:41.417421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:32.410 [2024-11-19 07:31:41.417429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:16:32.410 [2024-11-19 07:31:41.417438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.410 [2024-11-19 07:31:41.429360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.410 [2024-11-19 07:31:41.429391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:32.410 [2024-11-19 07:31:41.429400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.895 ms 00:16:32.410 [2024-11-19 07:31:41.429413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.410 [2024-11-19 07:31:41.429605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.410 [2024-11-19 07:31:41.429615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:32.410 [2024-11-19 07:31:41.429623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:16:32.410 [2024-11-19 07:31:41.429631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.410 [2024-11-19 07:31:41.466602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.410 [2024-11-19 07:31:41.466644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.410 [2024-11-19 07:31:41.466656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.410 [2024-11-19 07:31:41.466666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.410 [2024-11-19 07:31:41.466724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.410 [2024-11-19 07:31:41.466733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.410 [2024-11-19 07:31:41.466740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.410 [2024-11-19 07:31:41.466749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.410 [2024-11-19 07:31:41.466812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.410 [2024-11-19 07:31:41.466824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.410 [2024-11-19 07:31:41.466831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.410 [2024-11-19 07:31:41.466844] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.410 [2024-11-19 07:31:41.466858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.410 [2024-11-19 07:31:41.466867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.410 [2024-11-19 07:31:41.466875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.410 [2024-11-19 07:31:41.466883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.410 [2024-11-19 07:31:41.540243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.540288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.411 [2024-11-19 07:31:41.540300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.411 [2024-11-19 07:31:41.540311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.569356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:32.411 [2024-11-19 07:31:41.569366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.411 [2024-11-19 07:31:41.569375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.569445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.411 [2024-11-19 07:31:41.569453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.411 [2024-11-19 07:31:41.569463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.569512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.411 [2024-11-19 07:31:41.569520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.411 [2024-11-19 07:31:41.569529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.569623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.411 [2024-11-19 07:31:41.569630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.411 [2024-11-19 07:31:41.569639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.569677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:32.411 [2024-11-19 07:31:41.569684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.411 [2024-11-19 07:31:41.569693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.569734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.411 [2024-11-19 07:31:41.569742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:16:32.411 [2024-11-19 07:31:41.569752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.411 [2024-11-19 07:31:41.569804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.411 [2024-11-19 07:31:41.569812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.411 [2024-11-19 07:31:41.569821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.411 [2024-11-19 07:31:41.569937] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.421 ms, result 0 00:16:32.411 true 00:16:32.411 07:31:41 -- ftl/bdevperf.sh@37 -- # killprocess 71668 00:16:32.411 07:31:41 -- common/autotest_common.sh@936 -- # '[' -z 71668 ']' 00:16:32.411 07:31:41 -- common/autotest_common.sh@940 -- # kill -0 71668 00:16:32.411 07:31:41 -- common/autotest_common.sh@941 -- # uname 00:16:32.411 07:31:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:32.411 07:31:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71668 00:16:32.411 07:31:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:32.411 killing process with pid 71668 00:16:32.411 07:31:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:32.411 07:31:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71668' 00:16:32.411 07:31:41 -- common/autotest_common.sh@955 -- # kill 71668 00:16:32.411 Received shutdown signal, test time was about 4.000000 seconds 00:16:32.411 00:16:32.411 Latency(us) 00:16:32.411 [2024-11-19T07:31:41.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.411 [2024-11-19T07:31:41.661Z] =================================================================================================================== 00:16:32.411 [2024-11-19T07:31:41.661Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.411 07:31:41 -- common/autotest_common.sh@960 -- # wait 71668 00:16:37.677 07:31:46 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:37.677 07:31:46 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:37.677 07:31:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.677 07:31:46 -- common/autotest_common.sh@10 -- # set +x 00:16:37.677 07:31:46 -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:37.677 07:31:46 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:37.677 Remove shared memory files 00:16:37.677 07:31:46 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:37.677 07:31:46 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:37.677 07:31:46 -- ftl/common.sh@207 -- # rm -f rm -f 00:16:37.677 07:31:46 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:37.677 07:31:46 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:37.677 ************************************ 00:16:37.677 END TEST ftl_bdevperf 00:16:37.677 ************************************ 00:16:37.677 00:16:37.677 real 0m24.854s 00:16:37.677 user 0m27.230s 00:16:37.677 sys 0m0.884s 00:16:37.677 07:31:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:37.677 07:31:46 -- common/autotest_common.sh@10 -- # set +x 00:16:37.677 07:31:46 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:37.677 07:31:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
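For reference, the killprocess helper exercised just above follows a small liveness-check pattern: kill -0 probes the pid, ps confirms the process name (guarding against signalling sudo), then kill plus wait reaps the target. A minimal sketch, assuming a stand-alone rewrite of the helper the log traces out of autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1            # a pid argument is required
        kill -0 "$pid" || return 1           # probe: is the process still alive?
        if [ "$(uname)" = Linux ]; then
            # the trace checks the comm name; assumption here: bail out
            # rather than signal sudo when the pid resolves to it
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap (works when the target is a child of this shell)
    }

In the run above the comm name resolves to reactor_0, so the helper proceeds to kill 71668 and waits for the bdevperf shutdown seen in the trace.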
00:16:37.677 07:31:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.677 07:31:46 -- common/autotest_common.sh@10 -- # set +x 00:16:37.677 ************************************ 00:16:37.677 START TEST ftl_trim 00:16:37.677 ************************************ 00:16:37.677 07:31:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:37.677 * Looking for test storage... 00:16:37.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:37.677 07:31:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:37.677 07:31:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:37.677 07:31:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:37.677 07:31:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:37.677 07:31:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:37.677 07:31:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:37.677 07:31:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:37.677 07:31:46 -- scripts/common.sh@335 -- # IFS=.-: 00:16:37.677 07:31:46 -- scripts/common.sh@335 -- # read -ra ver1 00:16:37.677 07:31:46 -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.677 07:31:46 -- scripts/common.sh@336 -- # read -ra ver2 00:16:37.677 07:31:46 -- scripts/common.sh@337 -- # local 'op=<' 00:16:37.677 07:31:46 -- scripts/common.sh@339 -- # ver1_l=2 00:16:37.677 07:31:46 -- scripts/common.sh@340 -- # ver2_l=1 00:16:37.677 07:31:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:37.677 07:31:46 -- scripts/common.sh@343 -- # case "$op" in 00:16:37.677 07:31:46 -- scripts/common.sh@344 -- # : 1 00:16:37.677 07:31:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:37.677 07:31:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.677 07:31:46 -- scripts/common.sh@364 -- # decimal 1 00:16:37.677 07:31:46 -- scripts/common.sh@352 -- # local d=1 00:16:37.677 07:31:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.677 07:31:46 -- scripts/common.sh@354 -- # echo 1 00:16:37.677 07:31:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:37.677 07:31:46 -- scripts/common.sh@365 -- # decimal 2 00:16:37.677 07:31:46 -- scripts/common.sh@352 -- # local d=2 00:16:37.677 07:31:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.677 07:31:46 -- scripts/common.sh@354 -- # echo 2 00:16:37.677 07:31:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:37.677 07:31:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:37.677 07:31:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:37.677 07:31:46 -- scripts/common.sh@367 -- # return 0 00:16:37.677 07:31:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.677 07:31:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:37.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.677 --rc genhtml_branch_coverage=1 00:16:37.677 --rc genhtml_function_coverage=1 00:16:37.677 --rc genhtml_legend=1 00:16:37.677 --rc geninfo_all_blocks=1 00:16:37.677 --rc geninfo_unexecuted_blocks=1 00:16:37.677 00:16:37.677 ' 00:16:37.677 07:31:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:37.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.677 --rc genhtml_branch_coverage=1 00:16:37.677 --rc genhtml_function_coverage=1 00:16:37.677 --rc genhtml_legend=1 00:16:37.677 --rc geninfo_all_blocks=1 00:16:37.677 --rc geninfo_unexecuted_blocks=1 00:16:37.677 00:16:37.677 ' 00:16:37.677 07:31:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:37.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.677 --rc genhtml_branch_coverage=1 00:16:37.677 --rc genhtml_function_coverage=1 00:16:37.677 --rc genhtml_legend=1 00:16:37.677 --rc geninfo_all_blocks=1 00:16:37.677 --rc geninfo_unexecuted_blocks=1 00:16:37.677 00:16:37.677 ' 00:16:37.677 07:31:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:37.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.677 --rc genhtml_branch_coverage=1 00:16:37.677 --rc genhtml_function_coverage=1 00:16:37.677 --rc genhtml_legend=1 00:16:37.677 --rc geninfo_all_blocks=1 00:16:37.677 --rc geninfo_unexecuted_blocks=1 00:16:37.677 00:16:37.677 ' 00:16:37.677 07:31:46 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:37.677 07:31:46 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:37.677 07:31:46 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:37.677 07:31:46 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:37.677 07:31:46 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
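The lcov gate traced above relies on the version comparison in scripts/common.sh: split each version string on dots, dashes, and colons, then compare numerically component by component. A condensed sketch using the same identifier names as the xtrace (the validation of non-numeric components via decimal() is elided here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                        # split on dots, dashes, colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # an absent component compares as 0, so 1.15 < 2 holds
            if ((ver1[v] > ver2[v])); then [[ $op == '>' ]]; return; fi
            if ((ver1[v] < ver2[v])); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '=' ]]                     # every component matched
    }

Hence lt 1.15 2 returns 0, and the harness enables the extra lcov branch/function coverage flags exported above.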
00:16:37.677 07:31:46 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:37.677 07:31:46 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:37.677 07:31:46 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:37.677 07:31:46 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:37.677 07:31:46 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:37.677 07:31:46 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:37.677 07:31:46 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:37.677 07:31:46 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:37.677 07:31:46 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:37.677 07:31:46 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:37.677 07:31:46 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:37.677 07:31:46 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:37.677 07:31:46 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:37.677 07:31:46 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:37.677 07:31:46 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:37.677 07:31:46 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:37.677 07:31:46 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:37.677 07:31:46 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:37.677 07:31:46 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:37.677 07:31:46 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:37.677 07:31:46 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:37.677 07:31:46 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:37.677 07:31:46 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:37.677 07:31:46 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:37.677 07:31:46 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:37.677 07:31:46 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:37.677 07:31:46 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:37.677 07:31:46 -- ftl/trim.sh@25 -- # timeout=240 00:16:37.677 07:31:46 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:37.677 07:31:46 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:37.677 07:31:46 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:37.677 07:31:46 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:37.677 07:31:46 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:37.677 07:31:46 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:37.677 07:31:46 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:37.677 07:31:46 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:37.677 07:31:46 -- ftl/trim.sh@40 -- # svcpid=72051 00:16:37.677 07:31:46 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:37.677 07:31:46 -- ftl/trim.sh@41 -- # waitforlisten 72051 00:16:37.677 07:31:46 -- common/autotest_common.sh@829 -- # '[' -z 72051 ']' 00:16:37.677 07:31:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.677 
07:31:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.677 07:31:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.677 07:31:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.677 07:31:46 -- common/autotest_common.sh@10 -- # set +x 00:16:37.677 [2024-11-19 07:31:46.690208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:37.677 [2024-11-19 07:31:46.690785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72051 ] 00:16:37.677 [2024-11-19 07:31:46.837721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.936 [2024-11-19 07:31:47.018431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.936 [2024-11-19 07:31:47.019006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.936 [2024-11-19 07:31:47.019194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.936 [2024-11-19 07:31:47.019228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.310 07:31:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.310 07:31:48 -- common/autotest_common.sh@862 -- # return 0 00:16:39.310 07:31:48 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:39.310 07:31:48 -- ftl/common.sh@54 -- # local name=nvme0 00:16:39.310 07:31:48 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:39.310 07:31:48 -- ftl/common.sh@56 -- # local size=103424 00:16:39.310 07:31:48 -- ftl/common.sh@59 -- # local base_bdev 00:16:39.310 07:31:48 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:39.310 07:31:48 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:39.310 07:31:48 -- ftl/common.sh@62 -- # local base_size 00:16:39.310 07:31:48 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:39.310 07:31:48 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:39.310 07:31:48 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:39.310 07:31:48 -- common/autotest_common.sh@1369 -- # local bs 00:16:39.310 07:31:48 -- common/autotest_common.sh@1370 -- # local nb 00:16:39.310 07:31:48 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:39.569 07:31:48 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:39.569 { 00:16:39.569 "name": "nvme0n1", 00:16:39.569 "aliases": [ 00:16:39.569 "9a64b6a1-f60b-47c4-8cd6-329d8661e9a4" 00:16:39.569 ], 00:16:39.569 "product_name": "NVMe disk", 00:16:39.569 "block_size": 4096, 00:16:39.569 "num_blocks": 1310720, 00:16:39.569 "uuid": "9a64b6a1-f60b-47c4-8cd6-329d8661e9a4", 00:16:39.569 "assigned_rate_limits": { 00:16:39.569 "rw_ios_per_sec": 0, 00:16:39.569 "rw_mbytes_per_sec": 0, 00:16:39.569 "r_mbytes_per_sec": 0, 00:16:39.569 "w_mbytes_per_sec": 0 00:16:39.569 }, 00:16:39.569 "claimed": true, 00:16:39.569 "claim_type": "read_many_write_one", 00:16:39.569 "zoned": false, 00:16:39.569 "supported_io_types": { 00:16:39.569 "read": true, 00:16:39.569 "write": true, 00:16:39.569 "unmap": true, 00:16:39.569 
"write_zeroes": true, 00:16:39.569 "flush": true, 00:16:39.569 "reset": true, 00:16:39.569 "compare": true, 00:16:39.569 "compare_and_write": false, 00:16:39.569 "abort": true, 00:16:39.569 "nvme_admin": true, 00:16:39.569 "nvme_io": true 00:16:39.569 }, 00:16:39.569 "driver_specific": { 00:16:39.569 "nvme": [ 00:16:39.569 { 00:16:39.569 "pci_address": "0000:00:07.0", 00:16:39.569 "trid": { 00:16:39.569 "trtype": "PCIe", 00:16:39.569 "traddr": "0000:00:07.0" 00:16:39.569 }, 00:16:39.569 "ctrlr_data": { 00:16:39.569 "cntlid": 0, 00:16:39.569 "vendor_id": "0x1b36", 00:16:39.569 "model_number": "QEMU NVMe Ctrl", 00:16:39.569 "serial_number": "12341", 00:16:39.569 "firmware_revision": "8.0.0", 00:16:39.569 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:39.569 "oacs": { 00:16:39.569 "security": 0, 00:16:39.569 "format": 1, 00:16:39.569 "firmware": 0, 00:16:39.569 "ns_manage": 1 00:16:39.569 }, 00:16:39.569 "multi_ctrlr": false, 00:16:39.569 "ana_reporting": false 00:16:39.569 }, 00:16:39.569 "vs": { 00:16:39.569 "nvme_version": "1.4" 00:16:39.569 }, 00:16:39.569 "ns_data": { 00:16:39.569 "id": 1, 00:16:39.569 "can_share": false 00:16:39.569 } 00:16:39.569 } 00:16:39.569 ], 00:16:39.569 "mp_policy": "active_passive" 00:16:39.569 } 00:16:39.569 } 00:16:39.569 ]' 00:16:39.569 07:31:48 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:39.569 07:31:48 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:39.569 07:31:48 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:39.569 07:31:48 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:39.569 07:31:48 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:39.569 07:31:48 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:39.569 07:31:48 -- ftl/common.sh@63 -- # base_size=5120 00:16:39.569 07:31:48 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:39.569 07:31:48 -- ftl/common.sh@67 -- # clear_lvols 00:16:39.569 07:31:48 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:39.569 07:31:48 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:39.827 07:31:48 -- ftl/common.sh@28 -- # stores=0c90456f-b674-4bf9-8bee-6ab97c1417fc 00:16:39.827 07:31:48 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:39.827 07:31:48 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c90456f-b674-4bf9-8bee-6ab97c1417fc 00:16:40.125 07:31:49 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:40.125 07:31:49 -- ftl/common.sh@68 -- # lvs=5466d1a8-07da-460d-bfc3-52ec814bfb86 00:16:40.125 07:31:49 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5466d1a8-07da-460d-bfc3-52ec814bfb86 00:16:40.414 07:31:49 -- ftl/trim.sh@43 -- # split_bdev=8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.414 07:31:49 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.414 07:31:49 -- ftl/common.sh@35 -- # local name=nvc0 00:16:40.414 07:31:49 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:40.414 07:31:49 -- ftl/common.sh@37 -- # local base_bdev=8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.414 07:31:49 -- ftl/common.sh@38 -- # local cache_size= 00:16:40.414 07:31:49 -- ftl/common.sh@41 -- # get_bdev_size 8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.414 07:31:49 -- common/autotest_common.sh@1367 -- # local bdev_name=8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.414 07:31:49 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:40.414 07:31:49 -- common/autotest_common.sh@1369 -- # local bs 00:16:40.414 07:31:49 -- common/autotest_common.sh@1370 -- # local nb 00:16:40.414 07:31:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.672 07:31:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:40.672 { 00:16:40.672 "name": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:40.672 "aliases": [ 00:16:40.672 "lvs/nvme0n1p0" 00:16:40.672 ], 00:16:40.672 "product_name": "Logical Volume", 00:16:40.672 "block_size": 4096, 00:16:40.672 "num_blocks": 26476544, 00:16:40.672 "uuid": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:40.672 "assigned_rate_limits": { 00:16:40.672 "rw_ios_per_sec": 0, 00:16:40.672 "rw_mbytes_per_sec": 0, 00:16:40.672 "r_mbytes_per_sec": 0, 00:16:40.672 "w_mbytes_per_sec": 0 00:16:40.672 }, 00:16:40.672 "claimed": false, 00:16:40.672 "zoned": false, 00:16:40.672 "supported_io_types": { 00:16:40.672 "read": true, 00:16:40.672 "write": true, 00:16:40.672 "unmap": true, 00:16:40.672 "write_zeroes": true, 00:16:40.672 "flush": false, 00:16:40.672 "reset": true, 00:16:40.672 "compare": false, 00:16:40.672 "compare_and_write": false, 00:16:40.672 "abort": false, 00:16:40.672 "nvme_admin": false, 00:16:40.672 "nvme_io": false 00:16:40.672 }, 00:16:40.672 "driver_specific": { 00:16:40.672 "lvol": { 00:16:40.672 "lvol_store_uuid": "5466d1a8-07da-460d-bfc3-52ec814bfb86", 00:16:40.672 "base_bdev": "nvme0n1", 00:16:40.672 "thin_provision": true, 00:16:40.672 "snapshot": false, 00:16:40.672 "clone": false, 00:16:40.672 "esnap_clone": false 00:16:40.672 } 00:16:40.672 } 00:16:40.672 } 00:16:40.672 ]' 00:16:40.672 07:31:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:40.672 07:31:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:40.672 07:31:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:40.672 07:31:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:40.672 07:31:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:40.672 07:31:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:40.672 07:31:49 -- ftl/common.sh@41 -- # local base_size=5171 00:16:40.672 07:31:49 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:40.672 07:31:49 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:40.932 07:31:49 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:40.932 07:31:49 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:40.932 07:31:49 -- ftl/common.sh@48 -- # get_bdev_size 8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.932 07:31:49 -- common/autotest_common.sh@1367 -- # local bdev_name=8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.932 07:31:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:40.932 07:31:49 -- common/autotest_common.sh@1369 -- # local bs 00:16:40.932 07:31:49 -- common/autotest_common.sh@1370 -- # local nb 00:16:40.932 07:31:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:40.932 07:31:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:40.932 { 00:16:40.932 "name": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:40.932 "aliases": [ 00:16:40.932 "lvs/nvme0n1p0" 00:16:40.932 ], 00:16:40.932 "product_name": "Logical Volume", 00:16:40.932 "block_size": 4096, 00:16:40.932 "num_blocks": 26476544, 
00:16:40.932 "uuid": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:40.932 "assigned_rate_limits": { 00:16:40.932 "rw_ios_per_sec": 0, 00:16:40.932 "rw_mbytes_per_sec": 0, 00:16:40.932 "r_mbytes_per_sec": 0, 00:16:40.932 "w_mbytes_per_sec": 0 00:16:40.932 }, 00:16:40.932 "claimed": false, 00:16:40.932 "zoned": false, 00:16:40.932 "supported_io_types": { 00:16:40.932 "read": true, 00:16:40.932 "write": true, 00:16:40.932 "unmap": true, 00:16:40.932 "write_zeroes": true, 00:16:40.932 "flush": false, 00:16:40.932 "reset": true, 00:16:40.932 "compare": false, 00:16:40.932 "compare_and_write": false, 00:16:40.932 "abort": false, 00:16:40.932 "nvme_admin": false, 00:16:40.932 "nvme_io": false 00:16:40.932 }, 00:16:40.932 "driver_specific": { 00:16:40.932 "lvol": { 00:16:40.932 "lvol_store_uuid": "5466d1a8-07da-460d-bfc3-52ec814bfb86", 00:16:40.932 "base_bdev": "nvme0n1", 00:16:40.932 "thin_provision": true, 00:16:40.932 "snapshot": false, 00:16:40.932 "clone": false, 00:16:40.932 "esnap_clone": false 00:16:40.932 } 00:16:40.932 } 00:16:40.932 } 00:16:40.932 ]' 00:16:40.932 07:31:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:41.193 07:31:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:41.193 07:31:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:41.193 07:31:50 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:41.193 07:31:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:41.193 07:31:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:41.193 07:31:50 -- ftl/common.sh@48 -- # cache_size=5171 00:16:41.193 07:31:50 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:41.193 07:31:50 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:41.193 07:31:50 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:41.193 07:31:50 -- ftl/trim.sh@47 -- # get_bdev_size 8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:41.193 07:31:50 -- common/autotest_common.sh@1367 -- # local bdev_name=8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:41.193 07:31:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:41.193 07:31:50 -- common/autotest_common.sh@1369 -- # local bs 00:16:41.193 07:31:50 -- common/autotest_common.sh@1370 -- # local nb 00:16:41.193 07:31:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f29835b-7eeb-4d38-b384-7a71e3a11801 00:16:41.454 07:31:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:41.454 { 00:16:41.454 "name": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:41.454 "aliases": [ 00:16:41.454 "lvs/nvme0n1p0" 00:16:41.454 ], 00:16:41.454 "product_name": "Logical Volume", 00:16:41.454 "block_size": 4096, 00:16:41.454 "num_blocks": 26476544, 00:16:41.454 "uuid": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:41.454 "assigned_rate_limits": { 00:16:41.454 "rw_ios_per_sec": 0, 00:16:41.454 "rw_mbytes_per_sec": 0, 00:16:41.454 "r_mbytes_per_sec": 0, 00:16:41.454 "w_mbytes_per_sec": 0 00:16:41.454 }, 00:16:41.454 "claimed": false, 00:16:41.454 "zoned": false, 00:16:41.454 "supported_io_types": { 00:16:41.454 "read": true, 00:16:41.454 "write": true, 00:16:41.454 "unmap": true, 00:16:41.454 "write_zeroes": true, 00:16:41.454 "flush": false, 00:16:41.454 "reset": true, 00:16:41.454 "compare": false, 00:16:41.454 "compare_and_write": false, 00:16:41.454 "abort": false, 00:16:41.454 "nvme_admin": false, 00:16:41.454 "nvme_io": false 00:16:41.454 }, 00:16:41.454 "driver_specific": { 00:16:41.454 "lvol": { 00:16:41.454 
"lvol_store_uuid": "5466d1a8-07da-460d-bfc3-52ec814bfb86", 00:16:41.454 "base_bdev": "nvme0n1", 00:16:41.454 "thin_provision": true, 00:16:41.454 "snapshot": false, 00:16:41.454 "clone": false, 00:16:41.454 "esnap_clone": false 00:16:41.454 } 00:16:41.454 } 00:16:41.454 } 00:16:41.454 ]' 00:16:41.454 07:31:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:41.454 07:31:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:41.454 07:31:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:41.454 07:31:50 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:41.454 07:31:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:41.454 07:31:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:41.454 07:31:50 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:41.454 07:31:50 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8f29835b-7eeb-4d38-b384-7a71e3a11801 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:41.714 [2024-11-19 07:31:50.851369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.714 [2024-11-19 07:31:50.851404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:41.714 [2024-11-19 07:31:50.851417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:41.714 [2024-11-19 07:31:50.851424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.714 [2024-11-19 07:31:50.853617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.714 [2024-11-19 07:31:50.853641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.715 [2024-11-19 07:31:50.853651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:16:41.715 [2024-11-19 07:31:50.853657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.853726] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:41.715 [2024-11-19 07:31:50.854288] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:41.715 [2024-11-19 07:31:50.854307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.854314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.715 [2024-11-19 07:31:50.854322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:16:41.715 [2024-11-19 07:31:50.854328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.854506] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0a4382e0-c346-4fea-b0fa-834ba9590bee 00:16:41.715 [2024-11-19 07:31:50.855426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.855453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:41.715 [2024-11-19 07:31:50.855461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:41.715 [2024-11-19 07:31:50.855469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.860329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.860412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.715 
[2024-11-19 07:31:50.860451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:16:41.715 [2024-11-19 07:31:50.860470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.860600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.860895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.715 [2024-11-19 07:31:50.860958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:41.715 [2024-11-19 07:31:50.860984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.861033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.861054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:41.715 [2024-11-19 07:31:50.861167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:41.715 [2024-11-19 07:31:50.861199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.861252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:41.715 [2024-11-19 07:31:50.864165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.864255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.715 [2024-11-19 07:31:50.864302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms 00:16:41.715 [2024-11-19 07:31:50.864320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.864378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.864394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:41.715 [2024-11-19 07:31:50.864411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:41.715 [2024-11-19 07:31:50.864516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.864563] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:41.715 [2024-11-19 07:31:50.864661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:41.715 [2024-11-19 07:31:50.864723] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:41.715 [2024-11-19 07:31:50.864748] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:41.715 [2024-11-19 07:31:50.864794] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:41.715 [2024-11-19 07:31:50.864819] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:41.715 [2024-11-19 07:31:50.864845] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:41.715 [2024-11-19 07:31:50.864859] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:41.715 [2024-11-19 07:31:50.864876] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:41.715 [2024-11-19 07:31:50.864890] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:41.715 [2024-11-19 
07:31:50.864945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.864962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:41.715 [2024-11-19 07:31:50.864978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:16:41.715 [2024-11-19 07:31:50.864992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.865059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.715 [2024-11-19 07:31:50.865075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:41.715 [2024-11-19 07:31:50.865092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:41.715 [2024-11-19 07:31:50.865139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.715 [2024-11-19 07:31:50.865244] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:41.715 [2024-11-19 07:31:50.865263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:41.715 [2024-11-19 07:31:50.865280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.715 [2024-11-19 07:31:50.865295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.715 [2024-11-19 07:31:50.865311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:41.715 [2024-11-19 07:31:50.865358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:41.715 [2024-11-19 07:31:50.865377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:41.715 [2024-11-19 07:31:50.865391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:41.715 [2024-11-19 07:31:50.865406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:41.715 [2024-11-19 07:31:50.865419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.715 [2024-11-19 07:31:50.865434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:41.715 [2024-11-19 07:31:50.865471] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:41.715 [2024-11-19 07:31:50.865490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.715 [2024-11-19 07:31:50.865527] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:41.715 [2024-11-19 07:31:50.865546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:41.715 [2024-11-19 07:31:50.865560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.715 [2024-11-19 07:31:50.865595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:41.715 [2024-11-19 07:31:50.865611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:41.715 [2024-11-19 07:31:50.865625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.715 [2024-11-19 07:31:50.865639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:41.715 [2024-11-19 07:31:50.865712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:41.715 [2024-11-19 07:31:50.865729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:41.715 [2024-11-19 07:31:50.865744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:41.715 [2024-11-19 07:31:50.865758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:16:41.715 [2024-11-19 07:31:50.865773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:41.715 [2024-11-19 07:31:50.865786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:41.715 [2024-11-19 07:31:50.865802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:41.715 [2024-11-19 07:31:50.865879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:41.715 [2024-11-19 07:31:50.865897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:41.715 [2024-11-19 07:31:50.865911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:41.715 [2024-11-19 07:31:50.865926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:41.715 [2024-11-19 07:31:50.865940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:41.715 [2024-11-19 07:31:50.865956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:41.715 [2024-11-19 07:31:50.866032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:41.715 [2024-11-19 07:31:50.866050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:41.715 [2024-11-19 07:31:50.866064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:41.715 [2024-11-19 07:31:50.866079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.715 [2024-11-19 07:31:50.866093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:41.715 [2024-11-19 07:31:50.866108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:41.715 [2024-11-19 07:31:50.866122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.715 [2024-11-19 07:31:50.866138] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:41.715 [2024-11-19 07:31:50.866225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:41.715 [2024-11-19 07:31:50.866238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.715 [2024-11-19 07:31:50.866244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.715 [2024-11-19 07:31:50.866253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:41.715 [2024-11-19 07:31:50.866259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:41.715 [2024-11-19 07:31:50.866266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:41.715 [2024-11-19 07:31:50.866271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:41.715 [2024-11-19 07:31:50.866280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:41.715 [2024-11-19 07:31:50.866285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:41.715 [2024-11-19 07:31:50.866293] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:41.716 [2024-11-19 07:31:50.866301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.716 [2024-11-19 07:31:50.866309] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:41.716 [2024-11-19 07:31:50.866315] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:41.716 [2024-11-19 07:31:50.866322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:41.716 [2024-11-19 07:31:50.866328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:41.716 [2024-11-19 07:31:50.866334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:41.716 [2024-11-19 07:31:50.866340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:41.716 [2024-11-19 07:31:50.866346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:41.716 [2024-11-19 07:31:50.866352] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:41.716 [2024-11-19 07:31:50.866358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:41.716 [2024-11-19 07:31:50.866364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:41.716 [2024-11-19 07:31:50.866371] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:41.716 [2024-11-19 07:31:50.866376] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:41.716 [2024-11-19 07:31:50.866386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:41.716 [2024-11-19 07:31:50.866391] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:41.716 [2024-11-19 07:31:50.866399] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.716 [2024-11-19 07:31:50.866405] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:41.716 [2024-11-19 07:31:50.866412] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:41.716 [2024-11-19 07:31:50.866417] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:41.716 [2024-11-19 07:31:50.866424] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:41.716 [2024-11-19 07:31:50.866430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.866437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:41.716 [2024-11-19 07:31:50.866443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:16:41.716 [2024-11-19 07:31:50.866449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.878364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
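For orientation, the layout dump above belongs to an FTL instance assembled earlier in this test by a chain of rpc.py calls; the sequence, condensed from the trace (UUIDs vary per run and are shown as placeholders):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # base NVMe
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0    # NV cache NVMe
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # thin 103424 MiB lvol
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
         --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The numbers in the dump are consistent with those arguments: 23592960 L2P entries at 4 bytes each is exactly the 90.00 MiB l2p region, and 23592960 user blocks of 4 KiB is the 90 GiB that ftl0 exposes once overprovisioning and metadata are carved out of the 103424 MiB base device.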
00:16:41.716 [2024-11-19 07:31:50.878394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.716 [2024-11-19 07:31:50.878402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.856 ms 00:16:41.716 [2024-11-19 07:31:50.878409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.878502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.878512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:41.716 [2024-11-19 07:31:50.878520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:41.716 [2024-11-19 07:31:50.878526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.903378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.903486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.716 [2024-11-19 07:31:50.903498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.827 ms 00:16:41.716 [2024-11-19 07:31:50.903506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.903554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.903563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.716 [2024-11-19 07:31:50.903570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:41.716 [2024-11-19 07:31:50.903580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.903860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.903874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.716 [2024-11-19 07:31:50.903880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:16:41.716 [2024-11-19 07:31:50.903886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.903976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.903984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.716 [2024-11-19 07:31:50.903991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:41.716 [2024-11-19 07:31:50.903998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.926545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.926597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.716 [2024-11-19 07:31:50.926616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.519 ms 00:16:41.716 [2024-11-19 07:31:50.926632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.716 [2024-11-19 07:31:50.938443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:41.716 [2024-11-19 07:31:50.950562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.716 [2024-11-19 07:31:50.950587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:41.716 [2024-11-19 07:31:50.950597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.784 ms 00:16:41.716 
[2024-11-19 07:31:50.950604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.974 [2024-11-19 07:31:51.019846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.974 [2024-11-19 07:31:51.019882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:41.974 [2024-11-19 07:31:51.019894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.188 ms 00:16:41.974 [2024-11-19 07:31:51.019901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.974 [2024-11-19 07:31:51.019943] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:41.974 [2024-11-19 07:31:51.019952] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:44.513 [2024-11-19 07:31:53.411555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.411614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:44.513 [2024-11-19 07:31:53.411632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2391.595 ms 00:16:44.513 [2024-11-19 07:31:53.411641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.411861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.411875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:44.513 [2024-11-19 07:31:53.411886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:44.513 [2024-11-19 07:31:53.411893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.435398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.435432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:44.513 [2024-11-19 07:31:53.435446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.467 ms 00:16:44.513 [2024-11-19 07:31:53.435454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.457628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.457785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:44.513 [2024-11-19 07:31:53.457809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.114 ms 00:16:44.513 [2024-11-19 07:31:53.457816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.458146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.458158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:44.513 [2024-11-19 07:31:53.458168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:44.513 [2024-11-19 07:31:53.458177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.534595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.534638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:44.513 [2024-11-19 07:31:53.534654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.363 ms 00:16:44.513 [2024-11-19 07:31:53.534663] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.558873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.558913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:44.513 [2024-11-19 07:31:53.558927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.132 ms 00:16:44.513 [2024-11-19 07:31:53.558935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.562568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.562712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:44.513 [2024-11-19 07:31:53.562734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:16:44.513 [2024-11-19 07:31:53.562741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.585758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.585882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:44.513 [2024-11-19 07:31:53.585901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.963 ms 00:16:44.513 [2024-11-19 07:31:53.585908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.585967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.585977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:44.513 [2024-11-19 07:31:53.585987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:44.513 [2024-11-19 07:31:53.585994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.586072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.513 [2024-11-19 07:31:53.586093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:44.513 [2024-11-19 07:31:53.586102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:44.513 [2024-11-19 07:31:53.586109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.513 [2024-11-19 07:31:53.586887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.513 [2024-11-19 07:31:53.590003] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2735.241 ms, result 0 00:16:44.513 [2024-11-19 07:31:53.590807] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:44.513 { 00:16:44.513 "name": "ftl0", 00:16:44.513 "uuid": "0a4382e0-c346-4fea-b0fa-834ba9590bee" 00:16:44.513 } 00:16:44.513 07:31:53 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:44.513 07:31:53 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:44.513 07:31:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.513 07:31:53 -- common/autotest_common.sh@899 -- # local i 00:16:44.513 07:31:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.513 07:31:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.513 07:31:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:44.775 07:31:53 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:44.775 [ 00:16:44.775 { 00:16:44.775 "name": "ftl0", 00:16:44.775 "aliases": [ 00:16:44.775 "0a4382e0-c346-4fea-b0fa-834ba9590bee" 00:16:44.775 ], 00:16:44.775 "product_name": "FTL disk", 00:16:44.775 "block_size": 4096, 00:16:44.775 "num_blocks": 23592960, 00:16:44.775 "uuid": "0a4382e0-c346-4fea-b0fa-834ba9590bee", 00:16:44.775 "assigned_rate_limits": { 00:16:44.775 "rw_ios_per_sec": 0, 00:16:44.776 "rw_mbytes_per_sec": 0, 00:16:44.776 "r_mbytes_per_sec": 0, 00:16:44.776 "w_mbytes_per_sec": 0 00:16:44.776 }, 00:16:44.776 "claimed": false, 00:16:44.776 "zoned": false, 00:16:44.776 "supported_io_types": { 00:16:44.776 "read": true, 00:16:44.776 "write": true, 00:16:44.776 "unmap": true, 00:16:44.776 "write_zeroes": true, 00:16:44.776 "flush": true, 00:16:44.776 "reset": false, 00:16:44.776 "compare": false, 00:16:44.776 "compare_and_write": false, 00:16:44.776 "abort": false, 00:16:44.776 "nvme_admin": false, 00:16:44.776 "nvme_io": false 00:16:44.776 }, 00:16:44.776 "driver_specific": { 00:16:44.776 "ftl": { 00:16:44.776 "base_bdev": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:44.776 "cache": "nvc0n1p0" 00:16:44.776 } 00:16:44.776 } 00:16:44.776 } 00:16:44.776 ] 00:16:44.776 07:31:53 -- common/autotest_common.sh@905 -- # return 0 00:16:44.776 07:31:53 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:44.776 07:31:53 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:45.035 07:31:54 -- ftl/trim.sh@56 -- # echo ']}' 00:16:45.035 07:31:54 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:45.297 07:31:54 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:45.297 { 00:16:45.297 "name": "ftl0", 00:16:45.297 "aliases": [ 00:16:45.297 "0a4382e0-c346-4fea-b0fa-834ba9590bee" 00:16:45.297 ], 00:16:45.297 "product_name": "FTL disk", 00:16:45.297 "block_size": 4096, 00:16:45.297 "num_blocks": 23592960, 00:16:45.297 "uuid": "0a4382e0-c346-4fea-b0fa-834ba9590bee", 00:16:45.297 "assigned_rate_limits": { 00:16:45.297 "rw_ios_per_sec": 0, 00:16:45.297 "rw_mbytes_per_sec": 0, 00:16:45.297 "r_mbytes_per_sec": 0, 00:16:45.297 "w_mbytes_per_sec": 0 00:16:45.297 }, 00:16:45.297 "claimed": false, 00:16:45.297 "zoned": false, 00:16:45.297 "supported_io_types": { 00:16:45.297 "read": true, 00:16:45.297 "write": true, 00:16:45.297 "unmap": true, 00:16:45.297 "write_zeroes": true, 00:16:45.297 "flush": true, 00:16:45.297 "reset": false, 00:16:45.297 "compare": false, 00:16:45.297 "compare_and_write": false, 00:16:45.297 "abort": false, 00:16:45.297 "nvme_admin": false, 00:16:45.297 "nvme_io": false 00:16:45.297 }, 00:16:45.297 "driver_specific": { 00:16:45.297 "ftl": { 00:16:45.297 "base_bdev": "8f29835b-7eeb-4d38-b384-7a71e3a11801", 00:16:45.297 "cache": "nvc0n1p0" 00:16:45.297 } 00:16:45.297 } 00:16:45.297 } 00:16:45.297 ]' 00:16:45.297 07:31:54 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:45.297 07:31:54 -- ftl/trim.sh@60 -- # nb=23592960 00:16:45.297 07:31:54 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:45.557 [2024-11-19 07:31:54.557532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.557576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:45.557 [2024-11-19 07:31:54.557589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:45.557 [2024-11-19 07:31:54.557598] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.557630] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:45.557 [2024-11-19 07:31:54.560094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.560123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:45.557 [2024-11-19 07:31:54.560138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:16:45.557 [2024-11-19 07:31:54.560146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.560680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.560699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:45.557 [2024-11-19 07:31:54.560711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:16:45.557 [2024-11-19 07:31:54.560719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.564374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.564394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:45.557 [2024-11-19 07:31:54.564407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:16:45.557 [2024-11-19 07:31:54.564416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.571286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.571414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:45.557 [2024-11-19 07:31:54.571434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms 00:16:45.557 [2024-11-19 07:31:54.571443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.594825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.594939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:45.557 [2024-11-19 07:31:54.594958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.295 ms 00:16:45.557 [2024-11-19 07:31:54.594965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.609551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.609667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:45.557 [2024-11-19 07:31:54.609686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.534 ms 00:16:45.557 [2024-11-19 07:31:54.609694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.609894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.609905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:45.557 [2024-11-19 07:31:54.609918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:16:45.557 [2024-11-19 07:31:54.609925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.632788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.632897] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:45.557 [2024-11-19 07:31:54.632915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.838 ms 00:16:45.557 [2024-11-19 07:31:54.632922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.655628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.655728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:45.557 [2024-11-19 07:31:54.655781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.656 ms 00:16:45.557 [2024-11-19 07:31:54.655803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.678539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.678642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:45.557 [2024-11-19 07:31:54.678691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.671 ms 00:16:45.557 [2024-11-19 07:31:54.678712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.701039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.557 [2024-11-19 07:31:54.701142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:45.557 [2024-11-19 07:31:54.701216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.200 ms 00:16:45.557 [2024-11-19 07:31:54.701239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.557 [2024-11-19 07:31:54.701304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:45.557 [2024-11-19 07:31:54.701334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:45.557 [2024-11-19 07:31:54.701368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:45.557 [2024-11-19 07:31:54.701397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:45.557 [2024-11-19 07:31:54.701427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:45.557 [2024-11-19 07:31:54.701502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:45.557 [2024-11-19 07:31:54.701535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:45.557 [2024-11-19 07:31:54.701564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.701995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702803] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.702893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.703956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 
07:31:54.704307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.704985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:45.558 [2024-11-19 07:31:54.705354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:45.558 [2024-11-19 07:31:54.705411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:45.559 [2024-11-19 07:31:54.705421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:45.559 [2024-11-19 07:31:54.705429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:45.559 [2024-11-19 07:31:54.705437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:45.559 [2024-11-19 07:31:54.705444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:45.559 [2024-11-19 07:31:54.705453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:45.559 [2024-11-19 07:31:54.705469] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:45.559 [2024-11-19 07:31:54.705479] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4382e0-c346-4fea-b0fa-834ba9590bee 00:16:45.559 [2024-11-19 07:31:54.705487] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:45.559 [2024-11-19 07:31:54.705495] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:45.559 [2024-11-19 07:31:54.705502] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:45.559 [2024-11-19 07:31:54.705511] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:45.559 [2024-11-19 07:31:54.705518] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:45.559 [2024-11-19 07:31:54.705527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:45.559 [2024-11-19 07:31:54.705534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:45.559 [2024-11-19 07:31:54.705543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:45.559 [2024-11-19 07:31:54.705549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:45.559 [2024-11-19 07:31:54.705560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.559 [2024-11-19 07:31:54.705569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:45.559 [2024-11-19 07:31:54.705579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms 00:16:45.559 [2024-11-19 07:31:54.705586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:45.559 [2024-11-19 07:31:54.717563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.559 [2024-11-19 07:31:54.717594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:45.559 [2024-11-19 07:31:54.717606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.907 ms 00:16:45.559 [2024-11-19 07:31:54.717613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.559 [2024-11-19 07:31:54.717836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.559 [2024-11-19 07:31:54.717854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:45.559 [2024-11-19 07:31:54.717865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:16:45.559 [2024-11-19 07:31:54.717872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.559 [2024-11-19 07:31:54.761481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.559 [2024-11-19 07:31:54.761511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.559 [2024-11-19 07:31:54.761525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.559 [2024-11-19 07:31:54.761533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.559 [2024-11-19 07:31:54.761621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.559 [2024-11-19 07:31:54.761631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.559 [2024-11-19 07:31:54.761640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.559 [2024-11-19 07:31:54.761647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.559 [2024-11-19 07:31:54.761702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.559 [2024-11-19 07:31:54.761711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.559 [2024-11-19 07:31:54.761720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.559 [2024-11-19 07:31:54.761727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.559 [2024-11-19 07:31:54.761754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.559 [2024-11-19 07:31:54.761763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.559 [2024-11-19 07:31:54.761772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.559 [2024-11-19 07:31:54.761779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.819 [2024-11-19 07:31:54.845783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.819 [2024-11-19 07:31:54.845936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.820 [2024-11-19 07:31:54.845957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 [2024-11-19 07:31:54.845964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.874982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.820 [2024-11-19 07:31:54.875016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.820 [2024-11-19 07:31:54.875028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 
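The shutdown trace running through this point is the 'FTL shutdown' management process: mngt/ftl_mngt.c reports every step as an Action / name / duration / status quadruplet, ftl_debug.c dumps band validity and device statistics, and the Rollback steps (which continue just below) release the startup resources in reverse order before the process finishes with result 0. The "WAF: inf" figure in the statistics dump above follows directly from the counters printed with it: write amplification factor is total media writes divided by user writes, and with total writes = 960 and user writes = 0 the ratio is infinite. A minimal sketch of the RPC that drives this trace, using the rpc.py path printed elsewhere in this log (the unload persists the L2P, NV cache, band, trim and superblock metadata and sets the clean state, as the steps above show):

  # Unload the FTL bdev; this runs the 'FTL shutdown' steps traced here.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_unload -b ftl0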
[2024-11-19 07:31:54.875036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.875096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.820 [2024-11-19 07:31:54.875105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.820 [2024-11-19 07:31:54.875114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 [2024-11-19 07:31:54.875121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.875164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.820 [2024-11-19 07:31:54.875171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.820 [2024-11-19 07:31:54.875202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 [2024-11-19 07:31:54.875220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.875319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.820 [2024-11-19 07:31:54.875328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.820 [2024-11-19 07:31:54.875339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 [2024-11-19 07:31:54.875346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.875395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.820 [2024-11-19 07:31:54.875404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:45.820 [2024-11-19 07:31:54.875415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 [2024-11-19 07:31:54.875422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.875466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.820 [2024-11-19 07:31:54.875475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.820 [2024-11-19 07:31:54.875484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 [2024-11-19 07:31:54.875490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.875545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.820 [2024-11-19 07:31:54.875554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.820 [2024-11-19 07:31:54.875565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.820 [2024-11-19 07:31:54.875572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.820 [2024-11-19 07:31:54.875746] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 318.193 ms, result 0 00:16:45.820 true 00:16:45.820 07:31:54 -- ftl/trim.sh@63 -- # killprocess 72051 00:16:45.820 07:31:54 -- common/autotest_common.sh@936 -- # '[' -z 72051 ']' 00:16:45.820 07:31:54 -- common/autotest_common.sh@940 -- # kill -0 72051 00:16:45.820 07:31:54 -- common/autotest_common.sh@941 -- # uname 00:16:45.820 07:31:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.820 07:31:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72051 00:16:45.820 killing process with pid 72051 00:16:45.820 07:31:54 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.820 07:31:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.820 07:31:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72051' 00:16:45.820 07:31:54 -- common/autotest_common.sh@955 -- # kill 72051 00:16:45.820 07:31:54 -- common/autotest_common.sh@960 -- # wait 72051 00:16:51.094 07:32:00 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:52.036 65536+0 records in 00:16:52.036 65536+0 records out 00:16:52.036 268435456 bytes (268 MB, 256 MiB) copied, 1.09671 s, 245 MB/s 00:16:52.036 07:32:01 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.297 [2024-11-19 07:32:01.303042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:52.297 [2024-11-19 07:32:01.303164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72273 ] 00:16:52.297 [2024-11-19 07:32:01.451491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.558 [2024-11-19 07:32:01.590008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.558 [2024-11-19 07:32:01.795575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:52.558 [2024-11-19 07:32:01.795625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:52.821 [2024-11-19 07:32:01.940633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.821 [2024-11-19 07:32:01.940673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:52.821 [2024-11-19 07:32:01.940684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:52.821 [2024-11-19 07:32:01.940690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.821 [2024-11-19 07:32:01.942760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.821 [2024-11-19 07:32:01.942790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.821 [2024-11-19 07:32:01.942797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:16:52.821 [2024-11-19 07:32:01.942803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.821 [2024-11-19 07:32:01.942859] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:52.821 [2024-11-19 07:32:01.943426] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:52.821 [2024-11-19 07:32:01.943438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.821 [2024-11-19 07:32:01.943444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.821 [2024-11-19 07:32:01.943451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:16:52.821 [2024-11-19 07:32:01.943456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.821 [2024-11-19 07:32:01.944596] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:52.821 [2024-11-19 07:32:01.954240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:52.821 [2024-11-19 07:32:01.954267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:52.821 [2024-11-19 07:32:01.954275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.644 ms 00:16:52.821 [2024-11-19 07:32:01.954281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.821 [2024-11-19 07:32:01.954387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.821 [2024-11-19 07:32:01.954397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:52.821 [2024-11-19 07:32:01.954403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:52.821 [2024-11-19 07:32:01.954409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.959076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.822 [2024-11-19 07:32:01.959101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.822 [2024-11-19 07:32:01.959108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.636 ms 00:16:52.822 [2024-11-19 07:32:01.959117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.959214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.822 [2024-11-19 07:32:01.959222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.822 [2024-11-19 07:32:01.959228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:52.822 [2024-11-19 07:32:01.959234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.959254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.822 [2024-11-19 07:32:01.959260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:52.822 [2024-11-19 07:32:01.959266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:52.822 [2024-11-19 07:32:01.959271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.959294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:52.822 [2024-11-19 07:32:01.962115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.822 [2024-11-19 07:32:01.962248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.822 [2024-11-19 07:32:01.962261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.831 ms 00:16:52.822 [2024-11-19 07:32:01.962271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.962303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.822 [2024-11-19 07:32:01.962310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:52.822 [2024-11-19 07:32:01.962316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:52.822 [2024-11-19 07:32:01.962322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.962336] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:52.822 [2024-11-19 07:32:01.962350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:52.822 [2024-11-19 07:32:01.962376] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:52.822 [2024-11-19 07:32:01.962389] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:52.822 [2024-11-19 07:32:01.962446] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:52.822 [2024-11-19 07:32:01.962453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:52.822 [2024-11-19 07:32:01.962460] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:52.822 [2024-11-19 07:32:01.962468] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962474] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962480] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:52.822 [2024-11-19 07:32:01.962486] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:52.822 [2024-11-19 07:32:01.962491] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:52.822 [2024-11-19 07:32:01.962498] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:52.822 [2024-11-19 07:32:01.962504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.822 [2024-11-19 07:32:01.962509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:52.822 [2024-11-19 07:32:01.962516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:16:52.822 [2024-11-19 07:32:01.962521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.962570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.822 [2024-11-19 07:32:01.962576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:52.822 [2024-11-19 07:32:01.962582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:52.822 [2024-11-19 07:32:01.962587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.822 [2024-11-19 07:32:01.962643] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:52.822 [2024-11-19 07:32:01.962650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:52.822 [2024-11-19 07:32:01.962656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:52.822 [2024-11-19 07:32:01.962672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:52.822 [2024-11-19 07:32:01.962689] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.822 [2024-11-19 07:32:01.962700] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:52.822 [2024-11-19 07:32:01.962705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:52.822 [2024-11-19 07:32:01.962710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.822 [2024-11-19 07:32:01.962716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:52.822 [2024-11-19 07:32:01.962726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:52.822 [2024-11-19 07:32:01.962731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:52.822 [2024-11-19 07:32:01.962741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:52.822 [2024-11-19 07:32:01.962746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:52.822 [2024-11-19 07:32:01.962756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:52.822 [2024-11-19 07:32:01.962761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:52.822 [2024-11-19 07:32:01.962771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:52.822 [2024-11-19 07:32:01.962786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962796] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:52.822 [2024-11-19 07:32:01.962800] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:52.822 [2024-11-19 07:32:01.962815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:52.822 [2024-11-19 07:32:01.962830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.822 [2024-11-19 07:32:01.962839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:52.822 [2024-11-19 07:32:01.962844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:52.822 [2024-11-19 07:32:01.962849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.822 [2024-11-19 07:32:01.962853] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:52.822 [2024-11-19 07:32:01.962860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:16:52.822 [2024-11-19 07:32:01.962865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.822 [2024-11-19 07:32:01.962879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:52.822 [2024-11-19 07:32:01.962885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:52.822 [2024-11-19 07:32:01.962890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:52.822 [2024-11-19 07:32:01.962895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:52.822 [2024-11-19 07:32:01.962900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:52.822 [2024-11-19 07:32:01.962905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:52.822 [2024-11-19 07:32:01.962911] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:52.822 [2024-11-19 07:32:01.962918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.822 [2024-11-19 07:32:01.962927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:52.822 [2024-11-19 07:32:01.962933] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:52.822 [2024-11-19 07:32:01.962938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:52.822 [2024-11-19 07:32:01.962944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:52.822 [2024-11-19 07:32:01.962949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:52.822 [2024-11-19 07:32:01.962955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:52.822 [2024-11-19 07:32:01.962960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:52.823 [2024-11-19 07:32:01.962965] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:52.823 [2024-11-19 07:32:01.962971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:52.823 [2024-11-19 07:32:01.962976] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:52.823 [2024-11-19 07:32:01.962982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:52.823 [2024-11-19 07:32:01.962987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:52.823 [2024-11-19 07:32:01.962993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:52.823 [2024-11-19 07:32:01.962998] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:52.823 [2024-11-19 07:32:01.963008] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.823 [2024-11-19 07:32:01.963014] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:52.823 [2024-11-19 07:32:01.963020] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:52.823 [2024-11-19 07:32:01.963025] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:52.823 [2024-11-19 07:32:01.963030] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:52.823 [2024-11-19 07:32:01.963036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:01.963042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:52.823 [2024-11-19 07:32:01.963047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:16:52.823 [2024-11-19 07:32:01.963053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:01.975276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:01.975299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.823 [2024-11-19 07:32:01.975307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.191 ms 00:16:52.823 [2024-11-19 07:32:01.975313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:01.975407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:01.975414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:52.823 [2024-11-19 07:32:01.975421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:52.823 [2024-11-19 07:32:01.975426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.011748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.011867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.823 [2024-11-19 07:32:02.011882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.304 ms 00:16:52.823 [2024-11-19 07:32:02.011890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.011952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.011961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.823 [2024-11-19 07:32:02.011971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:52.823 [2024-11-19 07:32:02.011977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.012285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.012299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.823 [2024-11-19 07:32:02.012305] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:16:52.823 [2024-11-19 07:32:02.012311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.012406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.012413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.823 [2024-11-19 07:32:02.012419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:52.823 [2024-11-19 07:32:02.012425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.023797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.023823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:52.823 [2024-11-19 07:32:02.023831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.354 ms 00:16:52.823 [2024-11-19 07:32:02.023838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.033557] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:52.823 [2024-11-19 07:32:02.033587] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:52.823 [2024-11-19 07:32:02.033596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.033602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:52.823 [2024-11-19 07:32:02.033609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.679 ms 00:16:52.823 [2024-11-19 07:32:02.033615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.052211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.052238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:52.823 [2024-11-19 07:32:02.052251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.551 ms 00:16:52.823 [2024-11-19 07:32:02.052258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.061104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.061129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:52.823 [2024-11-19 07:32:02.061136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.795 ms 00:16:52.823 [2024-11-19 07:32:02.061146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.069851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.069876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:52.823 [2024-11-19 07:32:02.069883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.647 ms 00:16:52.823 [2024-11-19 07:32:02.069888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.823 [2024-11-19 07:32:02.070162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.823 [2024-11-19 07:32:02.070175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:52.823 [2024-11-19 07:32:02.070191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:16:52.823 
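The layout dump a few lines up is internally consistent with the bdev reported earlier, and the numbers can be checked with plain shell arithmetic (illustrative only; every value is taken from the dump above): the device exports 23592960 blocks of 4096 bytes, i.e. exactly 90 GiB of user-addressable space; one 4-byte L2P entry per block ("L2P entries: 23592960", "L2P address size: 4") gives exactly the 90.00 MiB l2p region; and each of the 100 bands in the validity dumps spans 261120 blocks, i.e. 1020 MiB.

  # Sanity arithmetic on the layout values printed above.
  echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))  # 90   -> GiB of user LBAs
  echo $(( 23592960 * 4 / 1024 / 1024 ))            # 90   -> MiB for the l2p region
  echo $(( 261120 * 4 / 1024 ))                     # 1020 -> MiB per band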
[2024-11-19 07:32:02.070197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.115790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.115825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:53.084 [2024-11-19 07:32:02.115835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.575 ms 00:16:53.084 [2024-11-19 07:32:02.115841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.123778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:53.084 [2024-11-19 07:32:02.135330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.135360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:53.084 [2024-11-19 07:32:02.135370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.416 ms 00:16:53.084 [2024-11-19 07:32:02.135376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.135431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.135439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:53.084 [2024-11-19 07:32:02.135446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:53.084 [2024-11-19 07:32:02.135454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.135492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.135501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:53.084 [2024-11-19 07:32:02.135508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:53.084 [2024-11-19 07:32:02.135514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.136440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.136468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:53.084 [2024-11-19 07:32:02.136475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:16:53.084 [2024-11-19 07:32:02.136480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.136506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.136512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:53.084 [2024-11-19 07:32:02.136522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:53.084 [2024-11-19 07:32:02.136527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.136552] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:53.084 [2024-11-19 07:32:02.136558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.136564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:53.084 [2024-11-19 07:32:02.136571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:53.084 [2024-11-19 07:32:02.136576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 
07:32:02.154683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.154798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:53.084 [2024-11-19 07:32:02.154812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.091 ms 00:16:53.084 [2024-11-19 07:32:02.154819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.154884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.084 [2024-11-19 07:32:02.154892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:53.084 [2024-11-19 07:32:02.154898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:53.084 [2024-11-19 07:32:02.154905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.084 [2024-11-19 07:32:02.155571] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.084 [2024-11-19 07:32:02.158009] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 214.721 ms, result 0 00:16:53.084 [2024-11-19 07:32:02.158735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.085 [2024-11-19 07:32:02.173688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.026  [2024-11-19T07:32:04.243Z] Copying: 39/256 [MB] (39 MBps) [2024-11-19T07:32:05.203Z] Copying: 56/256 [MB] (17 MBps) [2024-11-19T07:32:06.577Z] Copying: 81/256 [MB] (24 MBps) [2024-11-19T07:32:07.511Z] Copying: 117/256 [MB] (35 MBps) [2024-11-19T07:32:08.446Z] Copying: 163/256 [MB] (46 MBps) [2024-11-19T07:32:09.379Z] Copying: 183/256 [MB] (20 MBps) [2024-11-19T07:32:10.310Z] Copying: 202/256 [MB] (18 MBps) [2024-11-19T07:32:11.242Z] Copying: 221/256 [MB] (18 MBps) [2024-11-19T07:32:11.808Z] Copying: 242/256 [MB] (21 MBps) [2024-11-19T07:32:11.808Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-19 07:32:11.681096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.558 [2024-11-19 07:32:11.690208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.690238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:02.558 [2024-11-19 07:32:11.690256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:02.558 [2024-11-19 07:32:11.690264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.690286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:02.558 [2024-11-19 07:32:11.692749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.692772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:02.558 [2024-11-19 07:32:11.692783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.451 ms 00:17:02.558 [2024-11-19 07:32:11.692790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.695284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.695309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:02.558 
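The copy phase logged just above is the random-pattern write from trim.sh: dd generated the pattern at 245 MB/s (65536 records of 4 KiB = 268435456 bytes = 256 MiB, matching its own summary line), and spdk_dd then wrote those 256 MiB through the ftl0 bdev at an average of 26 MBps, roughly ten seconds of wall time (256 / 26 ≈ 9.8 s). A sketch of the two commands as printed in this log (the dd output path is an assumption, since the excerpt only shows its if/bs/count arguments, while the spdk_dd paths are printed verbatim):

  # 65536 * 4096 B = 268435456 B = 256 MiB of random data
  dd if=/dev/urandom bs=4K count=65536 \
     of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern   # of= assumed
  # Write the pattern through the FTL bdev (paths as printed at trim.sh@69).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json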
[2024-11-19 07:32:11.695317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.474 ms 00:17:02.558 [2024-11-19 07:32:11.695324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.703642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.703676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:02.558 [2024-11-19 07:32:11.703685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.296 ms 00:17:02.558 [2024-11-19 07:32:11.703692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.710589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.710612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:02.558 [2024-11-19 07:32:11.710621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.846 ms 00:17:02.558 [2024-11-19 07:32:11.710628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.734206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.734231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:02.558 [2024-11-19 07:32:11.734241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.535 ms 00:17:02.558 [2024-11-19 07:32:11.734248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.748458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.748486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:02.558 [2024-11-19 07:32:11.748496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.170 ms 00:17:02.558 [2024-11-19 07:32:11.748503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.748640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.748650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:02.558 [2024-11-19 07:32:11.748658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:02.558 [2024-11-19 07:32:11.748666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.772514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.772539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:02.558 [2024-11-19 07:32:11.772548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.833 ms 00:17:02.558 [2024-11-19 07:32:11.772554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.558 [2024-11-19 07:32:11.795487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.558 [2024-11-19 07:32:11.795512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:02.558 [2024-11-19 07:32:11.795520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.891 ms 00:17:02.558 [2024-11-19 07:32:11.795527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.818 [2024-11-19 07:32:11.818371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.818 [2024-11-19 07:32:11.818395] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:02.818 [2024-11-19 07:32:11.818404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:17:02.818 [2024-11-19 07:32:11.818410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.818 [2024-11-19 07:32:11.840986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.818 [2024-11-19 07:32:11.841009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:02.818 [2024-11-19 07:32:11.841018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.510 ms 00:17:02.818 [2024-11-19 07:32:11.841025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.818 [2024-11-19 07:32:11.841065] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:02.818 [2024-11-19 07:32:11.841079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:17:02.818 [2024-11-19 07:32:11.841239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:02.818 [2024-11-19 07:32:11.841435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841773] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:02.819 [2024-11-19 07:32:11.841840] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:02.819 [2024-11-19 07:32:11.841847] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4382e0-c346-4fea-b0fa-834ba9590bee 00:17:02.819 [2024-11-19 07:32:11.841855] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:02.819 [2024-11-19 07:32:11.841863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:02.819 [2024-11-19 07:32:11.841870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:02.819 [2024-11-19 07:32:11.841877] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:02.819 [2024-11-19 07:32:11.841884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:02.819 [2024-11-19 07:32:11.841891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:02.819 [2024-11-19 07:32:11.841900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:02.819 [2024-11-19 07:32:11.841907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:02.819 [2024-11-19 07:32:11.841913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:02.819 [2024-11-19 07:32:11.841919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.819 [2024-11-19 07:32:11.841927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:02.819 [2024-11-19 07:32:11.841935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:17:02.819 [2024-11-19 07:32:11.841943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.819 [2024-11-19 07:32:11.854052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.819 [2024-11-19 07:32:11.854074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:02.819 [2024-11-19 07:32:11.854083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.093 ms 00:17:02.819 [2024-11-19 07:32:11.854094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.819 [2024-11-19 07:32:11.854318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.819 [2024-11-19 07:32:11.854331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:02.819 [2024-11-19 07:32:11.854339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:17:02.819 [2024-11-19 07:32:11.854346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:02.819 [2024-11-19 07:32:11.891656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.819 [2024-11-19 07:32:11.891682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.819 [2024-11-19 07:32:11.891692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.819 [2024-11-19 07:32:11.891703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.819 [2024-11-19 07:32:11.891776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.819 [2024-11-19 07:32:11.891784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.819 [2024-11-19 07:32:11.891792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.819 [2024-11-19 07:32:11.891798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.819 [2024-11-19 07:32:11.891835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.819 [2024-11-19 07:32:11.891844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.819 [2024-11-19 07:32:11.891852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.819 [2024-11-19 07:32:11.891858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.819 [2024-11-19 07:32:11.891877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.819 [2024-11-19 07:32:11.891884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.819 [2024-11-19 07:32:11.891891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.819 [2024-11-19 07:32:11.891898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.819 [2024-11-19 07:32:11.962868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.819 [2024-11-19 07:32:11.962895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.819 [2024-11-19 07:32:11.962905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.819 [2024-11-19 07:32:11.962915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.819 [2024-11-19 07:32:11.991835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.819 [2024-11-19 07:32:11.991862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.820 [2024-11-19 07:32:11.991872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.820 [2024-11-19 07:32:11.991879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.820 [2024-11-19 07:32:11.991925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.820 [2024-11-19 07:32:11.991933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.820 [2024-11-19 07:32:11.991941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.820 [2024-11-19 07:32:11.991948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.820 [2024-11-19 07:32:11.991976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.820 [2024-11-19 07:32:11.991987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.820 [2024-11-19 07:32:11.991995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.820 [2024-11-19 
07:32:11.992002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.820 [2024-11-19 07:32:11.992084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.820 [2024-11-19 07:32:11.992094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.820 [2024-11-19 07:32:11.992101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.820 [2024-11-19 07:32:11.992108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.820 [2024-11-19 07:32:11.992136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.820 [2024-11-19 07:32:11.992147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:02.820 [2024-11-19 07:32:11.992154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.820 [2024-11-19 07:32:11.992161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.820 [2024-11-19 07:32:11.992209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.820 [2024-11-19 07:32:11.992218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.820 [2024-11-19 07:32:11.992226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.820 [2024-11-19 07:32:11.992233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.820 [2024-11-19 07:32:11.992273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.820 [2024-11-19 07:32:11.992285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.820 [2024-11-19 07:32:11.992296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.820 [2024-11-19 07:32:11.992303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.820 [2024-11-19 07:32:11.992426] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 302.236 ms, result 0 00:17:04.194 00:17:04.194 00:17:04.194 07:32:13 -- ftl/trim.sh@72 -- # svcpid=72398 00:17:04.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.194 07:32:13 -- ftl/trim.sh@73 -- # waitforlisten 72398 00:17:04.194 07:32:13 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:04.194 07:32:13 -- common/autotest_common.sh@829 -- # '[' -z 72398 ']' 00:17:04.194 07:32:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.194 07:32:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.194 07:32:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.194 07:32:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.194 07:32:13 -- common/autotest_common.sh@10 -- # set +x 00:17:04.194 [2024-11-19 07:32:13.167345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:04.194 [2024-11-19 07:32:13.167453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72398 ] 00:17:04.194 [2024-11-19 07:32:13.310011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.452 [2024-11-19 07:32:13.483450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.452 [2024-11-19 07:32:13.483656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.827 07:32:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.827 07:32:14 -- common/autotest_common.sh@862 -- # return 0 00:17:05.827 07:32:14 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:05.827 [2024-11-19 07:32:14.863994] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.827 [2024-11-19 07:32:14.864051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.827 [2024-11-19 07:32:15.027642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.827 [2024-11-19 07:32:15.027688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.827 [2024-11-19 07:32:15.027702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.827 [2024-11-19 07:32:15.027710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.827 [2024-11-19 07:32:15.030363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.827 [2024-11-19 07:32:15.030398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.827 [2024-11-19 07:32:15.030409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:17:05.827 [2024-11-19 07:32:15.030416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.827 [2024-11-19 07:32:15.030495] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.827 [2024-11-19 07:32:15.031202] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.827 [2024-11-19 07:32:15.031225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.827 [2024-11-19 07:32:15.031233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.827 [2024-11-19 07:32:15.031243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:05.827 [2024-11-19 07:32:15.031250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.827 [2024-11-19 07:32:15.032346] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:05.827 [2024-11-19 07:32:15.044997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.827 [2024-11-19 07:32:15.045032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:05.827 [2024-11-19 07:32:15.045044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.656 ms 00:17:05.827 [2024-11-19 07:32:15.045052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.827 [2024-11-19 07:32:15.045127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.827 [2024-11-19 07:32:15.045139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:17:05.827 [2024-11-19 07:32:15.045151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:05.827 [2024-11-19 07:32:15.045160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.827 [2024-11-19 07:32:15.049896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.827 [2024-11-19 07:32:15.049930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.827 [2024-11-19 07:32:15.049938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.661 ms 00:17:05.827 [2024-11-19 07:32:15.049949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.827 [2024-11-19 07:32:15.050023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.827 [2024-11-19 07:32:15.050033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.827 [2024-11-19 07:32:15.050042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:05.827 [2024-11-19 07:32:15.050051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.827 [2024-11-19 07:32:15.050076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.828 [2024-11-19 07:32:15.050087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.828 [2024-11-19 07:32:15.050095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:05.828 [2024-11-19 07:32:15.050103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.828 [2024-11-19 07:32:15.050129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:05.828 [2024-11-19 07:32:15.053577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.828 [2024-11-19 07:32:15.053603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.828 [2024-11-19 07:32:15.053614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:17:05.828 [2024-11-19 07:32:15.053621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.828 [2024-11-19 07:32:15.053658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.828 [2024-11-19 07:32:15.053666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.828 [2024-11-19 07:32:15.053675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:05.828 [2024-11-19 07:32:15.053684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.828 [2024-11-19 07:32:15.053709] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:05.828 [2024-11-19 07:32:15.053726] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:05.828 [2024-11-19 07:32:15.053759] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:05.828 [2024-11-19 07:32:15.053774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:05.828 [2024-11-19 07:32:15.053849] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:05.828 [2024-11-19 07:32:15.053859] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:17:05.828 [2024-11-19 07:32:15.053872] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:05.828 [2024-11-19 07:32:15.053882] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.828 [2024-11-19 07:32:15.053893] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.828 [2024-11-19 07:32:15.053900] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:05.828 [2024-11-19 07:32:15.053908] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.828 [2024-11-19 07:32:15.053915] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:05.828 [2024-11-19 07:32:15.053926] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:05.828 [2024-11-19 07:32:15.053933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.828 [2024-11-19 07:32:15.053941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.828 [2024-11-19 07:32:15.053948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:17:05.828 [2024-11-19 07:32:15.053956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.828 [2024-11-19 07:32:15.054032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.828 [2024-11-19 07:32:15.054042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.828 [2024-11-19 07:32:15.054049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:05.828 [2024-11-19 07:32:15.054058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.828 [2024-11-19 07:32:15.054133] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.828 [2024-11-19 07:32:15.054144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.828 [2024-11-19 07:32:15.054151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.828 [2024-11-19 07:32:15.054176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.828 [2024-11-19 07:32:15.054218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.828 [2024-11-19 07:32:15.054234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.828 [2024-11-19 07:32:15.054242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:05.828 [2024-11-19 07:32:15.054249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.828 [2024-11-19 07:32:15.054257] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.828 [2024-11-19 07:32:15.054264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:05.828 [2024-11-19 07:32:15.054271] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.828 [2024-11-19 07:32:15.054286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:05.828 [2024-11-19 07:32:15.054292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:05.828 [2024-11-19 07:32:15.054307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:05.828 [2024-11-19 07:32:15.054314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.828 [2024-11-19 07:32:15.054330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.828 [2024-11-19 07:32:15.054355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.828 [2024-11-19 07:32:15.054378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.828 [2024-11-19 07:32:15.054399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.828 [2024-11-19 07:32:15.054420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.828 [2024-11-19 07:32:15.054434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.828 [2024-11-19 07:32:15.054441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:05.828 [2024-11-19 07:32:15.054450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.828 [2024-11-19 07:32:15.054455] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.828 [2024-11-19 07:32:15.054466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.828 [2024-11-19 07:32:15.054473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.828 [2024-11-19 07:32:15.054489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.828 [2024-11-19 07:32:15.054497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.828 [2024-11-19 07:32:15.054504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:17:05.828 [2024-11-19 07:32:15.054511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.828 [2024-11-19 07:32:15.054518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.828 [2024-11-19 07:32:15.054526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.828 [2024-11-19 07:32:15.054533] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.828 [2024-11-19 07:32:15.054544] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.828 [2024-11-19 07:32:15.054552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:05.828 [2024-11-19 07:32:15.054560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:05.828 [2024-11-19 07:32:15.054567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:05.828 [2024-11-19 07:32:15.054578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:05.828 [2024-11-19 07:32:15.054585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:05.828 [2024-11-19 07:32:15.054594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:05.829 [2024-11-19 07:32:15.054600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:05.829 [2024-11-19 07:32:15.054608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:05.829 [2024-11-19 07:32:15.054615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:05.829 [2024-11-19 07:32:15.054623] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:05.829 [2024-11-19 07:32:15.054630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:05.829 [2024-11-19 07:32:15.054638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:05.829 [2024-11-19 07:32:15.054646] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:05.829 [2024-11-19 07:32:15.054654] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.829 [2024-11-19 07:32:15.054662] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.829 [2024-11-19 07:32:15.054671] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.829 [2024-11-19 07:32:15.054678] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.829 [2024-11-19 07:32:15.054686] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.829 [2024-11-19 07:32:15.054695] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.829 [2024-11-19 07:32:15.054717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.829 [2024-11-19 07:32:15.054724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.829 [2024-11-19 07:32:15.054733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:17:05.829 [2024-11-19 07:32:15.054740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.829 [2024-11-19 07:32:15.069367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.829 [2024-11-19 07:32:15.069400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.829 [2024-11-19 07:32:15.069415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.578 ms 00:17:05.829 [2024-11-19 07:32:15.069424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.829 [2024-11-19 07:32:15.069542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.829 [2024-11-19 07:32:15.069552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.829 [2024-11-19 07:32:15.069561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:05.829 [2024-11-19 07:32:15.069568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.099804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.099833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.088 [2024-11-19 07:32:15.099844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.214 ms 00:17:06.088 [2024-11-19 07:32:15.099851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.099905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.099915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.088 [2024-11-19 07:32:15.099925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:06.088 [2024-11-19 07:32:15.099932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.100258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.100276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.088 [2024-11-19 07:32:15.100287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:06.088 [2024-11-19 07:32:15.100294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.100409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.100421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.088 [2024-11-19 07:32:15.100433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:06.088 [2024-11-19 07:32:15.100439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.114923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.115050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.088 [2024-11-19 07:32:15.115070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.462 ms 00:17:06.088 [2024-11-19 07:32:15.115077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.127726] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:06.088 [2024-11-19 07:32:15.127754] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:06.088 [2024-11-19 07:32:15.127767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.127774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:06.088 [2024-11-19 07:32:15.127784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.592 ms 00:17:06.088 [2024-11-19 07:32:15.127791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.152082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.152111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:06.088 [2024-11-19 07:32:15.152123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.227 ms 00:17:06.088 [2024-11-19 07:32:15.152130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.163850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.163882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:06.088 [2024-11-19 07:32:15.163893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.642 ms 00:17:06.088 [2024-11-19 07:32:15.163899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.175388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.175415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:06.088 [2024-11-19 07:32:15.175428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.429 ms 00:17:06.088 [2024-11-19 07:32:15.175435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.175784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.175795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:06.088 [2024-11-19 07:32:15.175807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:06.088 [2024-11-19 07:32:15.175813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.233243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.233411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:06.088 [2024-11-19 07:32:15.233433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.407 ms 00:17:06.088 [2024-11-19 07:32:15.233441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 
07:32:15.244050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:06.088 [2024-11-19 07:32:15.257834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.257875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:06.088 [2024-11-19 07:32:15.257888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.097 ms 00:17:06.088 [2024-11-19 07:32:15.257897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.257959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.257972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:06.088 [2024-11-19 07:32:15.257981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:06.088 [2024-11-19 07:32:15.257992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.258040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.258049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:06.088 [2024-11-19 07:32:15.258057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:06.088 [2024-11-19 07:32:15.258065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.259225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.259349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:06.088 [2024-11-19 07:32:15.259364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:17:06.088 [2024-11-19 07:32:15.259373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.259405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.259417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:06.088 [2024-11-19 07:32:15.259425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:06.088 [2024-11-19 07:32:15.259433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.259467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:06.088 [2024-11-19 07:32:15.259479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.259486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:06.088 [2024-11-19 07:32:15.259494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:06.088 [2024-11-19 07:32:15.259502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.283139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.283266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:06.088 [2024-11-19 07:32:15.283286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.612 ms 00:17:06.088 [2024-11-19 07:32:15.283294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.283375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.088 [2024-11-19 07:32:15.283384] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:06.088 [2024-11-19 07:32:15.283394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:06.088 [2024-11-19 07:32:15.283404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.088 [2024-11-19 07:32:15.284625] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.088 [2024-11-19 07:32:15.287763] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 256.718 ms, result 0 00:17:06.088 [2024-11-19 07:32:15.290068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.088 Some configs were skipped because the RPC state that can call them passed over. 00:17:06.088 07:32:15 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:06.347 [2024-11-19 07:32:15.535467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.347 [2024-11-19 07:32:15.535516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:06.347 [2024-11-19 07:32:15.535529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.851 ms 00:17:06.347 [2024-11-19 07:32:15.535539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.347 [2024-11-19 07:32:15.535576] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 24.961 ms, result 0 00:17:06.347 true 00:17:06.347 07:32:15 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:06.605 [2024-11-19 07:32:15.750082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.605 [2024-11-19 07:32:15.750229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:06.605 [2024-11-19 07:32:15.750247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.312 ms 00:17:06.605 [2024-11-19 07:32:15.750254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.605 [2024-11-19 07:32:15.750286] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.516 ms, result 0 00:17:06.605 true 00:17:06.605 07:32:15 -- ftl/trim.sh@81 -- # killprocess 72398 00:17:06.605 07:32:15 -- common/autotest_common.sh@936 -- # '[' -z 72398 ']' 00:17:06.605 07:32:15 -- common/autotest_common.sh@940 -- # kill -0 72398 00:17:06.605 07:32:15 -- common/autotest_common.sh@941 -- # uname 00:17:06.605 07:32:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.605 07:32:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72398 00:17:06.605 killing process with pid 72398 00:17:06.605 07:32:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:06.605 07:32:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:06.605 07:32:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72398' 00:17:06.605 07:32:15 -- common/autotest_common.sh@955 -- # kill 72398 00:17:06.605 07:32:15 -- common/autotest_common.sh@960 -- # wait 72398 00:17:07.173 [2024-11-19 07:32:16.314450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.173 [2024-11-19 07:32:16.314497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:17:07.173 [2024-11-19 07:32:16.314508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:07.173 [2024-11-19 07:32:16.314517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.173 [2024-11-19 07:32:16.314534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:07.173 [2024-11-19 07:32:16.316589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.173 [2024-11-19 07:32:16.316613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:07.173 [2024-11-19 07:32:16.316624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.042 ms 00:17:07.173 [2024-11-19 07:32:16.316631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.173 [2024-11-19 07:32:16.316842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.173 [2024-11-19 07:32:16.316850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:07.173 [2024-11-19 07:32:16.316857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:07.173 [2024-11-19 07:32:16.316862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.173 [2024-11-19 07:32:16.320146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.173 [2024-11-19 07:32:16.320172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:07.173 [2024-11-19 07:32:16.320187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:17:07.173 [2024-11-19 07:32:16.320193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.173 [2024-11-19 07:32:16.325489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.173 [2024-11-19 07:32:16.325660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:07.173 [2024-11-19 07:32:16.325674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:17:07.173 [2024-11-19 07:32:16.325682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.173 [2024-11-19 07:32:16.333102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.174 [2024-11-19 07:32:16.333209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:07.174 [2024-11-19 07:32:16.333225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.376 ms 00:17:07.174 [2024-11-19 07:32:16.333231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.174 [2024-11-19 07:32:16.339791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.174 [2024-11-19 07:32:16.339891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:07.174 [2024-11-19 07:32:16.339905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.531 ms 00:17:07.174 [2024-11-19 07:32:16.339911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.174 [2024-11-19 07:32:16.340013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.174 [2024-11-19 07:32:16.340021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:07.174 [2024-11-19 07:32:16.340028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:07.174 [2024-11-19 07:32:16.340034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
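Note on the two bdev_ftl_unmap RPCs above: ftl/trim.sh trims 1024 blocks at LBA 0 and at LBA 23591936, which is the last 1024-block unit of the 23592960 L2P entries the layout dump below reports (23592960 - 1024 = 23591936). A minimal standalone sketch of the same sequence, assuming a running SPDK target that rpc.py can reach and an ftl0 bdev already created; the rpc.py path and all flags are copied from the commands captured in this log:

  #!/usr/bin/env bash
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # trim 1024 blocks at the very start of the ftl0 bdev
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  # trim the final 1024-block unit: 23592960 total L2P entries minus 1024
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call shows up in the trace above as a 'Process unmap' action followed by a 'FTL unmap' management process finishing with result 0.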
00:17:07.174 [2024-11-19 07:32:16.348021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.174 [2024-11-19 07:32:16.348045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:07.174 [2024-11-19 07:32:16.348053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.972 ms 00:17:07.174 [2024-11-19 07:32:16.348059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.174 [2024-11-19 07:32:16.355435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.174 [2024-11-19 07:32:16.355460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:07.174 [2024-11-19 07:32:16.355472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.348 ms 00:17:07.174 [2024-11-19 07:32:16.355477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.174 [2024-11-19 07:32:16.362685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.174 [2024-11-19 07:32:16.362779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:07.174 [2024-11-19 07:32:16.362793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.179 ms 00:17:07.174 [2024-11-19 07:32:16.362798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.174 [2024-11-19 07:32:16.369900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.174 [2024-11-19 07:32:16.370020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:07.174 [2024-11-19 07:32:16.370034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.043 ms 00:17:07.174 [2024-11-19 07:32:16.370039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.174 [2024-11-19 07:32:16.370070] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:07.174 [2024-11-19 07:32:16.370081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370157] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 
07:32:16.370326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:17:07.174 [2024-11-19 07:32:16.370483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:07.174 [2024-11-19 07:32:16.370515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:07.175 [2024-11-19 07:32:16.370742] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:07.175 [2024-11-19 07:32:16.370750] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4382e0-c346-4fea-b0fa-834ba9590bee 00:17:07.175 [2024-11-19 07:32:16.370756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:07.175 [2024-11-19 07:32:16.370762] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:07.175 [2024-11-19 07:32:16.370767] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:07.175 [2024-11-19 07:32:16.370774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:07.175 [2024-11-19 07:32:16.370779] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:07.175 [2024-11-19 07:32:16.370786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:07.175 [2024-11-19 07:32:16.370792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:07.175 [2024-11-19 07:32:16.370798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:07.175 [2024-11-19 07:32:16.370803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:07.175 [2024-11-19 07:32:16.370809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.175 [2024-11-19 07:32:16.370815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:07.175 [2024-11-19 07:32:16.370822] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:17:07.175 [2024-11-19 07:32:16.370829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.175 [2024-11-19 07:32:16.380302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.175 [2024-11-19 07:32:16.380398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:07.175 [2024-11-19 07:32:16.380413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.458 ms 00:17:07.175 [2024-11-19 07:32:16.380419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.175 [2024-11-19 07:32:16.380580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.175 [2024-11-19 07:32:16.380588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:07.175 [2024-11-19 07:32:16.380597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:07.175 [2024-11-19 07:32:16.380602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.175 [2024-11-19 07:32:16.415160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.175 [2024-11-19 07:32:16.415198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.175 [2024-11-19 07:32:16.415207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.175 [2024-11-19 07:32:16.415214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.175 [2024-11-19 07:32:16.415280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.175 [2024-11-19 07:32:16.415286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.175 [2024-11-19 07:32:16.415295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.175 [2024-11-19 07:32:16.415301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.175 [2024-11-19 07:32:16.415348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.175 [2024-11-19 07:32:16.415355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.175 [2024-11-19 07:32:16.415364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.175 [2024-11-19 07:32:16.415369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.175 [2024-11-19 07:32:16.415384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.175 [2024-11-19 07:32:16.415390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.175 [2024-11-19 07:32:16.415397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.175 [2024-11-19 07:32:16.415403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.434 [2024-11-19 07:32:16.476432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.434 [2024-11-19 07:32:16.476463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.434 [2024-11-19 07:32:16.476473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.434 [2024-11-19 07:32:16.476480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.434 [2024-11-19 07:32:16.498923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.434 [2024-11-19 07:32:16.499024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:17:07.434 [2024-11-19 07:32:16.499039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.434 [2024-11-19 07:32:16.499045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.434 [2024-11-19 07:32:16.499092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.434 [2024-11-19 07:32:16.499099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.434 [2024-11-19 07:32:16.499107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.434 [2024-11-19 07:32:16.499113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.434 [2024-11-19 07:32:16.499136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.434 [2024-11-19 07:32:16.499142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.434 [2024-11-19 07:32:16.499149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.434 [2024-11-19 07:32:16.499154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.434 [2024-11-19 07:32:16.499245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.434 [2024-11-19 07:32:16.499254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.434 [2024-11-19 07:32:16.499261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.434 [2024-11-19 07:32:16.499266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.435 [2024-11-19 07:32:16.499295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.435 [2024-11-19 07:32:16.499302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:07.435 [2024-11-19 07:32:16.499309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.435 [2024-11-19 07:32:16.499314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.435 [2024-11-19 07:32:16.499346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.435 [2024-11-19 07:32:16.499352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.435 [2024-11-19 07:32:16.499360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.435 [2024-11-19 07:32:16.499366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.435 [2024-11-19 07:32:16.499399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.435 [2024-11-19 07:32:16.499406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.435 [2024-11-19 07:32:16.499413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.435 [2024-11-19 07:32:16.499419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.435 [2024-11-19 07:32:16.499524] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 185.058 ms, result 0 00:17:08.370 07:32:17 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:08.370 07:32:17 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:08.370 [2024-11-19 07:32:17.405495] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:17:08.370 [2024-11-19 07:32:17.405607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72459 ] 00:17:08.370 [2024-11-19 07:32:17.553063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.628 [2024-11-19 07:32:17.726897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.889 [2024-11-19 07:32:17.975647] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.889 [2024-11-19 07:32:17.975713] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.889 [2024-11-19 07:32:18.126823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.889 [2024-11-19 07:32:18.126868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:08.889 [2024-11-19 07:32:18.126881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:08.889 [2024-11-19 07:32:18.126889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-19 07:32:18.129492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.889 [2024-11-19 07:32:18.129622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.889 [2024-11-19 07:32:18.129638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:17:08.889 [2024-11-19 07:32:18.129646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-19 07:32:18.130024] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:08.889 [2024-11-19 07:32:18.130795] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:08.889 [2024-11-19 07:32:18.130825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.889 [2024-11-19 07:32:18.130834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.889 [2024-11-19 07:32:18.130843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:17:08.889 [2024-11-19 07:32:18.130850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-19 07:32:18.132243] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:09.150 [2024-11-19 07:32:18.144862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.150 [2024-11-19 07:32:18.145004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:09.150 [2024-11-19 07:32:18.145022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.621 ms 00:17:09.150 [2024-11-19 07:32:18.145030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.150 [2024-11-19 07:32:18.145410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.150 [2024-11-19 07:32:18.145436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:09.150 [2024-11-19 07:32:18.145446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:09.150 [2024-11-19 07:32:18.145454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.150 [2024-11-19 07:32:18.150482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:09.150 [2024-11-19 07:32:18.150513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.150 [2024-11-19 07:32:18.150523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.979 ms 00:17:09.150 [2024-11-19 07:32:18.150534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.150 [2024-11-19 07:32:18.150634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.150 [2024-11-19 07:32:18.150644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.150 [2024-11-19 07:32:18.150652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:09.150 [2024-11-19 07:32:18.150659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.150 [2024-11-19 07:32:18.150683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.150 [2024-11-19 07:32:18.150691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:09.150 [2024-11-19 07:32:18.150699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:09.150 [2024-11-19 07:32:18.150705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.150 [2024-11-19 07:32:18.150734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:09.150 [2024-11-19 07:32:18.154232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.150 [2024-11-19 07:32:18.154260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.150 [2024-11-19 07:32:18.154269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 00:17:09.150 [2024-11-19 07:32:18.154278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.150 [2024-11-19 07:32:18.154314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.150 [2024-11-19 07:32:18.154321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:09.150 [2024-11-19 07:32:18.154329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:09.150 [2024-11-19 07:32:18.154336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.151 [2024-11-19 07:32:18.154352] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:09.151 [2024-11-19 07:32:18.154370] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:09.151 [2024-11-19 07:32:18.154401] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:09.151 [2024-11-19 07:32:18.154418] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:09.151 [2024-11-19 07:32:18.154490] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:09.151 [2024-11-19 07:32:18.154500] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:09.151 [2024-11-19 07:32:18.154509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:09.151 [2024-11-19 07:32:18.154518] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154527] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154534] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:09.151 [2024-11-19 07:32:18.154541] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:09.151 [2024-11-19 07:32:18.154548] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:09.151 [2024-11-19 07:32:18.154557] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:09.151 [2024-11-19 07:32:18.154564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.151 [2024-11-19 07:32:18.154571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:09.151 [2024-11-19 07:32:18.154578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:09.151 [2024-11-19 07:32:18.154585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.151 [2024-11-19 07:32:18.154658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.151 [2024-11-19 07:32:18.154667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:09.151 [2024-11-19 07:32:18.154674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:09.151 [2024-11-19 07:32:18.154680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.151 [2024-11-19 07:32:18.154755] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:09.151 [2024-11-19 07:32:18.154764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:09.151 [2024-11-19 07:32:18.154772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:09.151 [2024-11-19 07:32:18.154793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:09.151 [2024-11-19 07:32:18.154814] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.151 [2024-11-19 07:32:18.154828] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:09.151 [2024-11-19 07:32:18.154834] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:09.151 [2024-11-19 07:32:18.154841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.151 [2024-11-19 07:32:18.154849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:09.151 [2024-11-19 07:32:18.154861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:09.151 [2024-11-19 07:32:18.154867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:09.151 [2024-11-19 07:32:18.154880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:09.151 [2024-11-19 07:32:18.154886] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:09.151 [2024-11-19 07:32:18.154899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:09.151 [2024-11-19 07:32:18.154905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:09.151 [2024-11-19 07:32:18.154918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:09.151 [2024-11-19 07:32:18.154936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:09.151 [2024-11-19 07:32:18.154955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:09.151 [2024-11-19 07:32:18.154975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:09.151 [2024-11-19 07:32:18.154981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:09.151 [2024-11-19 07:32:18.154987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:09.151 [2024-11-19 07:32:18.154994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:09.151 [2024-11-19 07:32:18.155000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.151 [2024-11-19 07:32:18.155006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:09.151 [2024-11-19 07:32:18.155012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:09.151 [2024-11-19 07:32:18.155019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.151 [2024-11-19 07:32:18.155025] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:09.151 [2024-11-19 07:32:18.155032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:09.151 [2024-11-19 07:32:18.155039] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.151 [2024-11-19 07:32:18.155050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.151 [2024-11-19 07:32:18.155063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:09.151 [2024-11-19 07:32:18.155070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:09.151 [2024-11-19 07:32:18.155076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:09.151 [2024-11-19 07:32:18.155083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:09.151 [2024-11-19 07:32:18.155089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:09.151 [2024-11-19 07:32:18.155096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:09.151 
[2024-11-19 07:32:18.155103] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:09.151 [2024-11-19 07:32:18.155112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.151 [2024-11-19 07:32:18.155120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:09.151 [2024-11-19 07:32:18.155127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:09.151 [2024-11-19 07:32:18.155134] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:09.151 [2024-11-19 07:32:18.155141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:09.151 [2024-11-19 07:32:18.155149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:09.151 [2024-11-19 07:32:18.155155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:09.151 [2024-11-19 07:32:18.155162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:09.151 [2024-11-19 07:32:18.155169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:09.151 [2024-11-19 07:32:18.155195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:09.151 [2024-11-19 07:32:18.155203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:09.151 [2024-11-19 07:32:18.155210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:09.151 [2024-11-19 07:32:18.155217] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:09.151 [2024-11-19 07:32:18.155226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:09.151 [2024-11-19 07:32:18.155233] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:09.151 [2024-11-19 07:32:18.155243] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.151 [2024-11-19 07:32:18.155251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:09.151 [2024-11-19 07:32:18.155258] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:09.151 [2024-11-19 07:32:18.155264] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:09.151 [2024-11-19 07:32:18.155272] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:09.151 [2024-11-19 07:32:18.155281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.151 [2024-11-19 07:32:18.155288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:09.151 [2024-11-19 07:32:18.155295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:17:09.151 [2024-11-19 07:32:18.155302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.151 [2024-11-19 07:32:18.170296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.170326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.152 [2024-11-19 07:32:18.170336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.947 ms 00:17:09.152 [2024-11-19 07:32:18.170343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.170457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.170467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:09.152 [2024-11-19 07:32:18.170475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:09.152 [2024-11-19 07:32:18.170481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.212073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.212109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.152 [2024-11-19 07:32:18.212120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.571 ms 00:17:09.152 [2024-11-19 07:32:18.212128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.212209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.212221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.152 [2024-11-19 07:32:18.212232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:09.152 [2024-11-19 07:32:18.212239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.212564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.212578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.152 [2024-11-19 07:32:18.212587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:09.152 [2024-11-19 07:32:18.212594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.212710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.212719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.152 [2024-11-19 07:32:18.212726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:09.152 [2024-11-19 07:32:18.212733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.226745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.226875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.152 [2024-11-19 07:32:18.226890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.990 ms 00:17:09.152 [2024-11-19 07:32:18.226901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.239636] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:09.152 [2024-11-19 07:32:18.239754] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:09.152 [2024-11-19 07:32:18.239813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.239833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:09.152 [2024-11-19 07:32:18.239853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.816 ms 00:17:09.152 [2024-11-19 07:32:18.239870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.264277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.264400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:09.152 [2024-11-19 07:32:18.264450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.337 ms 00:17:09.152 [2024-11-19 07:32:18.264471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.276573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.276702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:09.152 [2024-11-19 07:32:18.276766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.773 ms 00:17:09.152 [2024-11-19 07:32:18.276788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.288866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.288996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:09.152 [2024-11-19 07:32:18.289052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.773 ms 00:17:09.152 [2024-11-19 07:32:18.289074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.290618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.290949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.152 [2024-11-19 07:32:18.291117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:17:09.152 [2024-11-19 07:32:18.291331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.361619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.361776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:09.152 [2024-11-19 07:32:18.361833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.164 ms 00:17:09.152 [2024-11-19 07:32:18.361862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.372622] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:09.152 [2024-11-19 07:32:18.387334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.387459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:09.152 [2024-11-19 
07:32:18.387508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.387 ms 00:17:09.152 [2024-11-19 07:32:18.387530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.387609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.387637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:09.152 [2024-11-19 07:32:18.387657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:09.152 [2024-11-19 07:32:18.387679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.387740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.387814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.152 [2024-11-19 07:32:18.387838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:09.152 [2024-11-19 07:32:18.387856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.389050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.389155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:09.152 [2024-11-19 07:32:18.389240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:17:09.152 [2024-11-19 07:32:18.389263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.389310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.389331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.152 [2024-11-19 07:32:18.389350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:09.152 [2024-11-19 07:32:18.389372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.152 [2024-11-19 07:32:18.389419] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:09.152 [2024-11-19 07:32:18.389441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.152 [2024-11-19 07:32:18.389460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:09.152 [2024-11-19 07:32:18.389590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:09.152 [2024-11-19 07:32:18.389613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.411 [2024-11-19 07:32:18.413951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.411 [2024-11-19 07:32:18.414068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.411 [2024-11-19 07:32:18.414121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.302 ms 00:17:09.411 [2024-11-19 07:32:18.414144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.411 [2024-11-19 07:32:18.414504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.411 [2024-11-19 07:32:18.414563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.411 [2024-11-19 07:32:18.414587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:09.411 [2024-11-19 07:32:18.414607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.411 [2024-11-19 07:32:18.415725] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.411 [2024-11-19 07:32:18.419304] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.624 ms, result 0 00:17:09.411 [2024-11-19 07:32:18.420539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.411 [2024-11-19 07:32:18.433901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.360  [2024-11-19T07:32:20.544Z] Copying: 24/256 [MB] (24 MBps) [2024-11-19T07:32:21.480Z] Copying: 59/256 [MB] (34 MBps) [2024-11-19T07:32:22.858Z] Copying: 78/256 [MB] (18 MBps) [2024-11-19T07:32:23.793Z] Copying: 104/256 [MB] (26 MBps) [2024-11-19T07:32:24.728Z] Copying: 125/256 [MB] (20 MBps) [2024-11-19T07:32:25.663Z] Copying: 140/256 [MB] (15 MBps) [2024-11-19T07:32:26.598Z] Copying: 165/256 [MB] (24 MBps) [2024-11-19T07:32:27.533Z] Copying: 191/256 [MB] (26 MBps) [2024-11-19T07:32:28.468Z] Copying: 213/256 [MB] (21 MBps) [2024-11-19T07:32:29.846Z] Copying: 228/256 [MB] (15 MBps) [2024-11-19T07:32:30.104Z] Copying: 244/256 [MB] (15 MBps) [2024-11-19T07:32:30.104Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-19 07:32:29.995037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.854 [2024-11-19 07:32:30.004568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.004605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.854 [2024-11-19 07:32:30.004617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:20.854 [2024-11-19 07:32:30.004626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.004647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.854 [2024-11-19 07:32:30.007270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.007302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.854 [2024-11-19 07:32:30.007312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:17:20.854 [2024-11-19 07:32:30.007320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.007580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.007589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.854 [2024-11-19 07:32:30.007600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:17:20.854 [2024-11-19 07:32:30.007607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.011312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.011337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.854 [2024-11-19 07:32:30.011346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:17:20.854 [2024-11-19 07:32:30.011354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.018245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.018278] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:20.854 [2024-11-19 07:32:30.018288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.863 ms 00:17:20.854 [2024-11-19 07:32:30.018300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.042628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.042669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.854 [2024-11-19 07:32:30.042681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.271 ms 00:17:20.854 [2024-11-19 07:32:30.042689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.057476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.057510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.854 [2024-11-19 07:32:30.057520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.740 ms 00:17:20.854 [2024-11-19 07:32:30.057527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.057671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.057681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.854 [2024-11-19 07:32:30.057689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:20.854 [2024-11-19 07:32:30.057696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.082133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.082167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:20.854 [2024-11-19 07:32:30.082177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.421 ms 00:17:20.854 [2024-11-19 07:32:30.082191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.854 [2024-11-19 07:32:30.106168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.854 [2024-11-19 07:32:30.106218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:20.854 [2024-11-19 07:32:30.106227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.935 ms 00:17:20.854 [2024-11-19 07:32:30.106233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.115 [2024-11-19 07:32:30.129438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.115 [2024-11-19 07:32:30.129472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:21.115 [2024-11-19 07:32:30.129481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.161 ms 00:17:21.115 [2024-11-19 07:32:30.129487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.115 [2024-11-19 07:32:30.152516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.115 [2024-11-19 07:32:30.152550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:21.115 [2024-11-19 07:32:30.152560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.960 ms 00:17:21.115 [2024-11-19 07:32:30.152566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.115 [2024-11-19 07:32:30.152609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:17:21.115 [2024-11-19 07:32:30.152623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.152998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:21.115 [2024-11-19 07:32:30.153005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153171] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:21.116 [2024-11-19 07:32:30.153400] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:21.116 [2024-11-19 07:32:30.153414] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:21.116 [2024-11-19 07:32:30.153422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4382e0-c346-4fea-b0fa-834ba9590bee
00:17:21.116 [2024-11-19 07:32:30.153430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:21.116 [2024-11-19 07:32:30.153437] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:21.116 [2024-11-19 07:32:30.153444] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:21.116 [2024-11-19 07:32:30.153453] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:21.116 [2024-11-19 07:32:30.153462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:21.116 [2024-11-19 07:32:30.153469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:21.116 [2024-11-19 07:32:30.153475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:21.116 [2024-11-19 07:32:30.153482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:21.116 [2024-11-19 07:32:30.153488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:21.116 [2024-11-19 07:32:30.153494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:21.116 [2024-11-19 07:32:30.153502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:21.116 [2024-11-19 07:32:30.153510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms
00:17:21.116 [2024-11-19 07:32:30.153516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.116 [2024-11-19 07:32:30.165830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:21.116 [2024-11-19 07:32:30.165859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:21.116 [2024-11-19 07:32:30.165872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms
00:17:21.116 [2024-11-19 07:32:30.165880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.116 [2024-11-19 07:32:30.166083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:21.116 [2024-11-19 07:32:30.166092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:21.116 [2024-11-19 07:32:30.166100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms
00:17:21.116 [2024-11-19 07:32:30.166106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.116 [2024-11-19 07:32:30.203224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.116 [2024-11-19 07:32:30.203263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:21.116 [2024-11-19 07:32:30.203272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.116 [2024-11-19 07:32:30.203279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.116 [2024-11-19 07:32:30.203353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.116 [2024-11-19 07:32:30.203361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:21.116 [2024-11-19 07:32:30.203369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.116 [2024-11-19 07:32:30.203376] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.116 [2024-11-19 07:32:30.203412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.116 [2024-11-19 07:32:30.203421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.116 [2024-11-19 07:32:30.203431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.116 [2024-11-19 07:32:30.203438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.116 [2024-11-19 07:32:30.203455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.116 [2024-11-19 07:32:30.203463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.116 [2024-11-19 07:32:30.203470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.116 [2024-11-19 07:32:30.203477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.116 [2024-11-19 07:32:30.277567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.116 [2024-11-19 07:32:30.277629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.116 [2024-11-19 07:32:30.277653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.116 [2024-11-19 07:32:30.277674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.116 [2024-11-19 07:32:30.306737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.116 [2024-11-19 07:32:30.306779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.116 [2024-11-19 07:32:30.306789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.116 [2024-11-19 07:32:30.306797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.116 [2024-11-19 07:32:30.306852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.116 [2024-11-19 07:32:30.306861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.116 [2024-11-19 07:32:30.306868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.116 [2024-11-19 07:32:30.306880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.116 [2024-11-19 07:32:30.306909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.116 [2024-11-19 07:32:30.306917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.117 [2024-11-19 07:32:30.306925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.117 [2024-11-19 07:32:30.306933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.117 [2024-11-19 07:32:30.307016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.117 [2024-11-19 07:32:30.307026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.117 [2024-11-19 07:32:30.307033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.117 [2024-11-19 07:32:30.307040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.117 [2024-11-19 07:32:30.307075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.117 [2024-11-19 07:32:30.307083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:21.117 [2024-11-19 07:32:30.307091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:17:21.117 [2024-11-19 07:32:30.307098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.117 [2024-11-19 07:32:30.307131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.117 [2024-11-19 07:32:30.307139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:21.117 [2024-11-19 07:32:30.307147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.117 [2024-11-19 07:32:30.307154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.117 [2024-11-19 07:32:30.307229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.117 [2024-11-19 07:32:30.307242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:21.117 [2024-11-19 07:32:30.307250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.117 [2024-11-19 07:32:30.307256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.117 [2024-11-19 07:32:30.307387] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 302.827 ms, result 0
00:17:22.050
00:17:22.050
00:17:22.050 07:32:31 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:17:22.050 07:32:31 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:22.617 07:32:31 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-19 07:32:31.741472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:22.617 [2024-11-19 07:32:31.741583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72613 ] 00:17:22.875 [2024-11-19 07:32:31.890458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.875 [2024-11-19 07:32:32.067371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.133 [2024-11-19 07:32:32.316869] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.133 [2024-11-19 07:32:32.316929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.392 [2024-11-19 07:32:32.467895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.467938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.392 [2024-11-19 07:32:32.467950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.392 [2024-11-19 07:32:32.467957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.470626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.470662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.392 [2024-11-19 07:32:32.470672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:17:23.392 [2024-11-19 07:32:32.470679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.470748] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.392 [2024-11-19 07:32:32.471461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.392 [2024-11-19 07:32:32.471484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.471492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.392 [2024-11-19 07:32:32.471500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:17:23.392 [2024-11-19 07:32:32.471507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.472599] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:23.392 [2024-11-19 07:32:32.485480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.485513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:23.392 [2024-11-19 07:32:32.485523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.883 ms 00:17:23.392 [2024-11-19 07:32:32.485531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.485607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.485618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:23.392 [2024-11-19 07:32:32.485625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:23.392 [2024-11-19 07:32:32.485633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.490393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 
07:32:32.490426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.392 [2024-11-19 07:32:32.490436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.719 ms 00:17:23.392 [2024-11-19 07:32:32.490447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.490545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.490555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.392 [2024-11-19 07:32:32.490563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:23.392 [2024-11-19 07:32:32.490570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.490593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.490601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.392 [2024-11-19 07:32:32.490609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:23.392 [2024-11-19 07:32:32.490616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.490644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:23.392 [2024-11-19 07:32:32.494075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.494104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.392 [2024-11-19 07:32:32.494113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.444 ms 00:17:23.392 [2024-11-19 07:32:32.494122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.494157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.392 [2024-11-19 07:32:32.494165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.392 [2024-11-19 07:32:32.494172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:23.392 [2024-11-19 07:32:32.494189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.392 [2024-11-19 07:32:32.494207] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:23.392 [2024-11-19 07:32:32.494223] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:23.393 [2024-11-19 07:32:32.494254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:23.393 [2024-11-19 07:32:32.494271] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:23.393 [2024-11-19 07:32:32.494341] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:23.393 [2024-11-19 07:32:32.494351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.393 [2024-11-19 07:32:32.494360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:23.393 [2024-11-19 07:32:32.494369] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494377] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494384] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:23.393 [2024-11-19 07:32:32.494391] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.393 [2024-11-19 07:32:32.494397] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:23.393 [2024-11-19 07:32:32.494407] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:23.393 [2024-11-19 07:32:32.494414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.393 [2024-11-19 07:32:32.494421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.393 [2024-11-19 07:32:32.494429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:17:23.393 [2024-11-19 07:32:32.494435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.393 [2024-11-19 07:32:32.494499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.393 [2024-11-19 07:32:32.494507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.393 [2024-11-19 07:32:32.494514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:23.393 [2024-11-19 07:32:32.494521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.393 [2024-11-19 07:32:32.494606] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.393 [2024-11-19 07:32:32.494623] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.393 [2024-11-19 07:32:32.494631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494646] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:23.393 [2024-11-19 07:32:32.494653] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.393 [2024-11-19 07:32:32.494673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.393 [2024-11-19 07:32:32.494688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.393 [2024-11-19 07:32:32.494695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:23.393 [2024-11-19 07:32:32.494702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.393 [2024-11-19 07:32:32.494708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.393 [2024-11-19 07:32:32.494720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:23.393 [2024-11-19 07:32:32.494727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.393 [2024-11-19 07:32:32.494739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:23.393 [2024-11-19 07:32:32.494746] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:23.393 [2024-11-19 07:32:32.494760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:23.393 [2024-11-19 07:32:32.494766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.393 [2024-11-19 07:32:32.494779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.393 [2024-11-19 07:32:32.494799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.393 [2024-11-19 07:32:32.494818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.393 [2024-11-19 07:32:32.494838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.393 [2024-11-19 07:32:32.494856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.393 [2024-11-19 07:32:32.494868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.393 [2024-11-19 07:32:32.494875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:23.393 [2024-11-19 07:32:32.494881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.393 [2024-11-19 07:32:32.494887] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.393 [2024-11-19 07:32:32.494894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.393 [2024-11-19 07:32:32.494901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.393 [2024-11-19 07:32:32.494918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.393 [2024-11-19 07:32:32.494924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.393 [2024-11-19 07:32:32.494930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.393 [2024-11-19 07:32:32.494937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.393 [2024-11-19 07:32:32.494943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.393 [2024-11-19 07:32:32.494949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.393 [2024-11-19 07:32:32.494957] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.393 [2024-11-19 07:32:32.494966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.393 [2024-11-19 07:32:32.494974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:23.393 [2024-11-19 07:32:32.494982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:23.393 [2024-11-19 07:32:32.494989] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:23.393 [2024-11-19 07:32:32.494996] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:23.393 [2024-11-19 07:32:32.495003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:23.393 [2024-11-19 07:32:32.495009] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:23.393 [2024-11-19 07:32:32.495016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:23.393 [2024-11-19 07:32:32.495023] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:23.393 [2024-11-19 07:32:32.495030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:23.393 [2024-11-19 07:32:32.495037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:23.393 [2024-11-19 07:32:32.495045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:23.393 [2024-11-19 07:32:32.495053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:23.393 [2024-11-19 07:32:32.495060] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:23.393 [2024-11-19 07:32:32.495067] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.393 [2024-11-19 07:32:32.495078] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.393 [2024-11-19 07:32:32.495086] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.393 [2024-11-19 07:32:32.495094] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.393 [2024-11-19 07:32:32.495101] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.393 [2024-11-19 07:32:32.495108] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.393 [2024-11-19 07:32:32.495115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.393 [2024-11-19 07:32:32.495123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.393 [2024-11-19 07:32:32.495130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:17:23.393 [2024-11-19 07:32:32.495137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.393 [2024-11-19 07:32:32.509905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.393 [2024-11-19 07:32:32.509939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.393 [2024-11-19 07:32:32.509949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.727 ms 00:17:23.394 [2024-11-19 07:32:32.509956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.510067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.510076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.394 [2024-11-19 07:32:32.510083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:23.394 [2024-11-19 07:32:32.510090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.547720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.547757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.394 [2024-11-19 07:32:32.547768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.609 ms 00:17:23.394 [2024-11-19 07:32:32.547775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.547839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.547849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.394 [2024-11-19 07:32:32.547861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:23.394 [2024-11-19 07:32:32.547868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.548190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.548211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.394 [2024-11-19 07:32:32.548220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:23.394 [2024-11-19 07:32:32.548227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.548344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.548353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.394 [2024-11-19 07:32:32.548361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:23.394 [2024-11-19 07:32:32.548368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.562167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.562215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.394 [2024-11-19 07:32:32.562224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.777 ms 00:17:23.394 
[2024-11-19 07:32:32.562234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.574971] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:23.394 [2024-11-19 07:32:32.575003] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:23.394 [2024-11-19 07:32:32.575013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.575020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:23.394 [2024-11-19 07:32:32.575029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.689 ms 00:17:23.394 [2024-11-19 07:32:32.575035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.599702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.599738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:23.394 [2024-11-19 07:32:32.599748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.605 ms 00:17:23.394 [2024-11-19 07:32:32.599755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.611679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.611706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:23.394 [2024-11-19 07:32:32.611721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.863 ms 00:17:23.394 [2024-11-19 07:32:32.611728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.623563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.623591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:23.394 [2024-11-19 07:32:32.623600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.776 ms 00:17:23.394 [2024-11-19 07:32:32.623607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.394 [2024-11-19 07:32:32.623960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.394 [2024-11-19 07:32:32.623979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.394 [2024-11-19 07:32:32.623988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:23.394 [2024-11-19 07:32:32.623998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.681530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.681568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.653 [2024-11-19 07:32:32.681580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.511 ms 00:17:23.653 [2024-11-19 07:32:32.681592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.692013] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.653 [2024-11-19 07:32:32.705370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.705402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.653 [2024-11-19 07:32:32.705413] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.707 ms 00:17:23.653 [2024-11-19 07:32:32.705420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.705479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.705489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.653 [2024-11-19 07:32:32.705500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:23.653 [2024-11-19 07:32:32.705507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.705551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.705560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.653 [2024-11-19 07:32:32.705568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:23.653 [2024-11-19 07:32:32.705575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.706721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.706747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:23.653 [2024-11-19 07:32:32.706756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:17:23.653 [2024-11-19 07:32:32.706763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.706790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.706801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.653 [2024-11-19 07:32:32.706808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.653 [2024-11-19 07:32:32.706815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.706845] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.653 [2024-11-19 07:32:32.706854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.706861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.653 [2024-11-19 07:32:32.706869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.653 [2024-11-19 07:32:32.706876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.730532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.730563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.653 [2024-11-19 07:32:32.730573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.635 ms 00:17:23.653 [2024-11-19 07:32:32.730581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.730660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.653 [2024-11-19 07:32:32.730670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.653 [2024-11-19 07:32:32.730678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:23.653 [2024-11-19 07:32:32.730685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.653 [2024-11-19 07:32:32.731466] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:23.653 [2024-11-19 07:32:32.734561] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.290 ms, result 0
00:17:23.653 [2024-11-19 07:32:32.735771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:23.653 [2024-11-19 07:32:32.748856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:23.912  [2024-11-19T07:32:33.162Z] Copying: 4096/4096 [kB] (average 14 MBps)
[2024-11-19 07:32:33.019904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:23.912 [2024-11-19 07:32:33.028379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.912 [2024-11-19 07:32:33.028416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:23.912 [2024-11-19 07:32:33.028426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:23.912 [2024-11-19 07:32:33.028434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.912 [2024-11-19 07:32:33.028454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:23.912 [2024-11-19 07:32:33.030888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.912 [2024-11-19 07:32:33.030916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:23.912 [2024-11-19 07:32:33.030926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.423 ms
00:17:23.912 [2024-11-19 07:32:33.030932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.912 [2024-11-19 07:32:33.033243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.912 [2024-11-19 07:32:33.033271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:23.912 [2024-11-19 07:32:33.033280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms
00:17:23.912 [2024-11-19 07:32:33.033291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.912 [2024-11-19 07:32:33.037647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.912 [2024-11-19 07:32:33.037674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:23.912 [2024-11-19 07:32:33.037683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms
00:17:23.912 [2024-11-19 07:32:33.037690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.912 [2024-11-19 07:32:33.044510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.912 [2024-11-19 07:32:33.044537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:17:23.912 [2024-11-19 07:32:33.044547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.794 ms
00:17:23.912 [2024-11-19 07:32:33.044559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.912 [2024-11-19 07:32:33.067700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.912 [2024-11-19 07:32:33.067730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:23.912 [2024-11-19 07:32:33.067740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.090 ms
00:17:23.912 [2024-11-19 
07:32:33.067746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.912 [2024-11-19 07:32:33.082090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.912 [2024-11-19 07:32:33.082122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.912 [2024-11-19 07:32:33.082134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.302 ms 00:17:23.912 [2024-11-19 07:32:33.082142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.912 [2024-11-19 07:32:33.082290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.912 [2024-11-19 07:32:33.082301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.912 [2024-11-19 07:32:33.082309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:23.912 [2024-11-19 07:32:33.082316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.912 [2024-11-19 07:32:33.105971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.912 [2024-11-19 07:32:33.106002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:23.912 [2024-11-19 07:32:33.106012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.640 ms 00:17:23.912 [2024-11-19 07:32:33.106018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.912 [2024-11-19 07:32:33.129126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.912 [2024-11-19 07:32:33.129153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:23.912 [2024-11-19 07:32:33.129163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.067 ms 00:17:23.912 [2024-11-19 07:32:33.129171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.912 [2024-11-19 07:32:33.152419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.912 [2024-11-19 07:32:33.152455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.912 [2024-11-19 07:32:33.152468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.141 ms 00:17:23.912 [2024-11-19 07:32:33.152475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.171 [2024-11-19 07:32:33.175138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.171 [2024-11-19 07:32:33.175172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.171 [2024-11-19 07:32:33.175199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.590 ms 00:17:24.171 [2024-11-19 07:32:33.175207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.171 [2024-11-19 07:32:33.175248] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.171 [2024-11-19 07:32:33.175263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.171 [2024-11-19 07:32:33.175272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.171 [2024-11-19 07:32:33.175279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.171 [2024-11-19 07:32:33.175288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175295] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 
07:32:33.175477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:24.172 [2024-11-19 07:32:33.175655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:24.172 [2024-11-19 07:32:33.175918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.173 [2024-11-19 07:32:33.175998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.173 [2024-11-19 07:32:33.176006] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4382e0-c346-4fea-b0fa-834ba9590bee 00:17:24.173 [2024-11-19 07:32:33.176013] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.173 [2024-11-19 07:32:33.176020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.173 [2024-11-19 
07:32:33.176027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.173 [2024-11-19 07:32:33.176034] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.173 [2024-11-19 07:32:33.176043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.173 [2024-11-19 07:32:33.176051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:24.173 [2024-11-19 07:32:33.176058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.173 [2024-11-19 07:32:33.176064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.173 [2024-11-19 07:32:33.176070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.173 [2024-11-19 07:32:33.176077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.173 [2024-11-19 07:32:33.176084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.173 [2024-11-19 07:32:33.176092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:17:24.173 [2024-11-19 07:32:33.176104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.188627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.173 [2024-11-19 07:32:33.188655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.173 [2024-11-19 07:32:33.188670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.505 ms 00:17:24.173 [2024-11-19 07:32:33.188677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.188885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.173 [2024-11-19 07:32:33.188894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.173 [2024-11-19 07:32:33.188902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:24.173 [2024-11-19 07:32:33.188908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.225781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.225819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.173 [2024-11-19 07:32:33.225833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.225841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.225922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.225931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.173 [2024-11-19 07:32:33.225938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.225945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.225984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.225993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.173 [2024-11-19 07:32:33.226000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.226010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.226028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.226035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.173 [2024-11-19 07:32:33.226042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.226049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.299113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.299158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.173 [2024-11-19 07:32:33.299173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.299187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.328725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.328767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.173 [2024-11-19 07:32:33.328778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.328786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.328842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.328852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.173 [2024-11-19 07:32:33.328859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.328866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.328900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.328908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.173 [2024-11-19 07:32:33.328916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.328923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.329006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.329016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.173 [2024-11-19 07:32:33.329024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.329031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.329060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.329069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.173 [2024-11-19 07:32:33.329075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.329082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.329118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.329131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.173 [2024-11-19 07:32:33.329139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.329146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 
[2024-11-19 07:32:33.329212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.173 [2024-11-19 07:32:33.329226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.173 [2024-11-19 07:32:33.329233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.173 [2024-11-19 07:32:33.329240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.173 [2024-11-19 07:32:33.329372] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.998 ms, result 0 00:17:25.108 00:17:25.108 00:17:25.108 07:32:34 -- ftl/trim.sh@93 -- # svcpid=72642 00:17:25.108 07:32:34 -- ftl/trim.sh@94 -- # waitforlisten 72642 00:17:25.108 07:32:34 -- common/autotest_common.sh@829 -- # '[' -z 72642 ']' 00:17:25.108 07:32:34 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:25.108 07:32:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.108 07:32:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.108 07:32:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.108 07:32:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.108 07:32:34 -- common/autotest_common.sh@10 -- # set +x 00:17:25.108 [2024-11-19 07:32:34.224824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:25.108 [2024-11-19 07:32:34.224938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72642 ] 00:17:25.367 [2024-11-19 07:32:34.372009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.367 [2024-11-19 07:32:34.547342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.367 [2024-11-19 07:32:34.547554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.769 07:32:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.769 07:32:35 -- common/autotest_common.sh@862 -- # return 0 00:17:26.769 07:32:35 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:26.769 [2024-11-19 07:32:35.900436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.769 [2024-11-19 07:32:35.900499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.028 [2024-11-19 07:32:36.059592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.028 [2024-11-19 07:32:36.059635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.028 [2024-11-19 07:32:36.059648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.028 [2024-11-19 07:32:36.059655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.028 [2024-11-19 07:32:36.061686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.028 [2024-11-19 07:32:36.061717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.028 [2024-11-19 07:32:36.061727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.015 ms 00:17:27.028 [2024-11-19 07:32:36.061733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.028 [2024-11-19 07:32:36.061792] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.028 [2024-11-19 07:32:36.062340] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.028 [2024-11-19 07:32:36.062359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.028 [2024-11-19 07:32:36.062366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.028 [2024-11-19 07:32:36.062374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:17:27.028 [2024-11-19 07:32:36.062380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.028 [2024-11-19 07:32:36.063390] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:27.028 [2024-11-19 07:32:36.073148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.028 [2024-11-19 07:32:36.073192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:27.028 [2024-11-19 07:32:36.073202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.764 ms 00:17:27.029 [2024-11-19 07:32:36.073210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.073266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.073276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:27.029 [2024-11-19 07:32:36.073284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:27.029 [2024-11-19 07:32:36.073291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.077508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.077538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.029 [2024-11-19 07:32:36.077546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:17:27.029 [2024-11-19 07:32:36.077553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.077632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.077648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.029 [2024-11-19 07:32:36.077658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:27.029 [2024-11-19 07:32:36.077666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.077693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.077704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.029 [2024-11-19 07:32:36.077713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:27.029 [2024-11-19 07:32:36.077727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.077749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:27.029 [2024-11-19 07:32:36.080474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.080498] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.029 [2024-11-19 07:32:36.080507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:17:27.029 [2024-11-19 07:32:36.080514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.080546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.080553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.029 [2024-11-19 07:32:36.080561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:27.029 [2024-11-19 07:32:36.080569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.080587] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:27.029 [2024-11-19 07:32:36.080600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:27.029 [2024-11-19 07:32:36.080627] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:27.029 [2024-11-19 07:32:36.080639] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:27.029 [2024-11-19 07:32:36.080696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:27.029 [2024-11-19 07:32:36.080704] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.029 [2024-11-19 07:32:36.080717] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:27.029 [2024-11-19 07:32:36.080725] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.029 [2024-11-19 07:32:36.080734] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.029 [2024-11-19 07:32:36.080741] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:27.029 [2024-11-19 07:32:36.080748] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.029 [2024-11-19 07:32:36.080754] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:27.029 [2024-11-19 07:32:36.080763] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:27.029 [2024-11-19 07:32:36.080769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.080776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.029 [2024-11-19 07:32:36.080782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:17:27.029 [2024-11-19 07:32:36.080790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.080839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.029 [2024-11-19 07:32:36.080847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.029 [2024-11-19 07:32:36.080853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:27.029 [2024-11-19 07:32:36.080861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.029 [2024-11-19 07:32:36.080919] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.029 [2024-11-19 07:32:36.080927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.029 [2024-11-19 07:32:36.080934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.029 [2024-11-19 07:32:36.080941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.029 [2024-11-19 07:32:36.080948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.029 [2024-11-19 07:32:36.080954] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.029 [2024-11-19 07:32:36.080960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:27.029 [2024-11-19 07:32:36.080970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.029 [2024-11-19 07:32:36.080976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:27.029 [2024-11-19 07:32:36.080983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.029 [2024-11-19 07:32:36.080988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.029 [2024-11-19 07:32:36.080996] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:27.029 [2024-11-19 07:32:36.081002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.029 [2024-11-19 07:32:36.081009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.029 [2024-11-19 07:32:36.081015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:27.029 [2024-11-19 07:32:36.081021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.029 [2024-11-19 07:32:36.081034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:27.029 [2024-11-19 07:32:36.081039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:27.029 [2024-11-19 07:32:36.081052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:27.029 [2024-11-19 07:32:36.081059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:27.029 [2024-11-19 07:32:36.081065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.029 [2024-11-19 07:32:36.081073] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.029 [2024-11-19 07:32:36.081089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.029 [2024-11-19 07:32:36.081095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.029 [2024-11-19 07:32:36.081107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.029 [2024-11-19 07:32:36.081114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.029 [2024-11-19 07:32:36.081126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.029 [2024-11-19 07:32:36.081132] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.029 [2024-11-19 07:32:36.081144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.029 [2024-11-19 07:32:36.081151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.029 [2024-11-19 07:32:36.081163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.029 [2024-11-19 07:32:36.081168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:27.029 [2024-11-19 07:32:36.081176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.029 [2024-11-19 07:32:36.081200] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.029 [2024-11-19 07:32:36.081210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:27.029 [2024-11-19 07:32:36.081216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.029 [2024-11-19 07:32:36.081224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.029 [2024-11-19 07:32:36.081230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:27.029 [2024-11-19 07:32:36.081238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.029 [2024-11-19 07:32:36.081244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:27.029 [2024-11-19 07:32:36.081251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.029 [2024-11-19 07:32:36.081256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.029 [2024-11-19 07:32:36.081264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.029 [2024-11-19 07:32:36.081270] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.029 [2024-11-19 07:32:36.081279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.029 [2024-11-19 07:32:36.081292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:27.029 [2024-11-19 07:32:36.081299] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:27.029 [2024-11-19 07:32:36.081306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:27.030 [2024-11-19 07:32:36.081316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:27.030 [2024-11-19 07:32:36.081322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:27.030 [2024-11-19 07:32:36.081329] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:27.030 [2024-11-19 07:32:36.081335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:27.030 [2024-11-19 
07:32:36.081342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:27.030 [2024-11-19 07:32:36.081348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:27.030 [2024-11-19 07:32:36.081355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:27.030 [2024-11-19 07:32:36.081361] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:27.030 [2024-11-19 07:32:36.081368] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:27.030 [2024-11-19 07:32:36.081375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:27.030 [2024-11-19 07:32:36.081382] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.030 [2024-11-19 07:32:36.081388] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.030 [2024-11-19 07:32:36.081396] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.030 [2024-11-19 07:32:36.081402] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.030 [2024-11-19 07:32:36.081409] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.030 [2024-11-19 07:32:36.081415] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:27.030 [2024-11-19 07:32:36.081424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.081430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.030 [2024-11-19 07:32:36.081437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:17:27.030 [2024-11-19 07:32:36.081443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.093421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.093451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.030 [2024-11-19 07:32:36.093462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.938 ms 00:17:27.030 [2024-11-19 07:32:36.093470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.093561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.093569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:27.030 [2024-11-19 07:32:36.093576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:27.030 [2024-11-19 07:32:36.093582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.117453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 
07:32:36.117482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.030 [2024-11-19 07:32:36.117491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.853 ms 00:17:27.030 [2024-11-19 07:32:36.117499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.117543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.117552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.030 [2024-11-19 07:32:36.117561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:27.030 [2024-11-19 07:32:36.117568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.117855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.117866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.030 [2024-11-19 07:32:36.117876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:27.030 [2024-11-19 07:32:36.117882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.117972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.117987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.030 [2024-11-19 07:32:36.117997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:27.030 [2024-11-19 07:32:36.118003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.129722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.129748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.030 [2024-11-19 07:32:36.129759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.702 ms 00:17:27.030 [2024-11-19 07:32:36.129765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.139509] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:27.030 [2024-11-19 07:32:36.139547] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:27.030 [2024-11-19 07:32:36.139558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.139565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:27.030 [2024-11-19 07:32:36.139574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.716 ms 00:17:27.030 [2024-11-19 07:32:36.139581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.158130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.158159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:27.030 [2024-11-19 07:32:36.158169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.503 ms 00:17:27.030 [2024-11-19 07:32:36.158176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.167128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.167161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:17:27.030 [2024-11-19 07:32:36.167170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.895 ms 00:17:27.030 [2024-11-19 07:32:36.167176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.175914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.175940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:27.030 [2024-11-19 07:32:36.175951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.690 ms 00:17:27.030 [2024-11-19 07:32:36.175956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.176230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.176239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:27.030 [2024-11-19 07:32:36.176250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:17:27.030 [2024-11-19 07:32:36.176256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.221924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.221963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:27.030 [2024-11-19 07:32:36.221978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.648 ms 00:17:27.030 [2024-11-19 07:32:36.221985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.229870] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:27.030 [2024-11-19 07:32:36.241554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.241590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:27.030 [2024-11-19 07:32:36.241599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.502 ms 00:17:27.030 [2024-11-19 07:32:36.241608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.241661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.241672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:27.030 [2024-11-19 07:32:36.241679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.030 [2024-11-19 07:32:36.241690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.241728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.241737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:27.030 [2024-11-19 07:32:36.241744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:27.030 [2024-11-19 07:32:36.241752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.242688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.242711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:27.030 [2024-11-19 07:32:36.242718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:17:27.030 [2024-11-19 07:32:36.242725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:27.030 [2024-11-19 07:32:36.242751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.242761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:27.030 [2024-11-19 07:32:36.242767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.030 [2024-11-19 07:32:36.242774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.242802] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:27.030 [2024-11-19 07:32:36.242812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.242819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:27.030 [2024-11-19 07:32:36.242826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:27.030 [2024-11-19 07:32:36.242832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.260828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.030 [2024-11-19 07:32:36.260858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:27.030 [2024-11-19 07:32:36.260868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 00:17:27.030 [2024-11-19 07:32:36.260874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.030 [2024-11-19 07:32:36.260942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.031 [2024-11-19 07:32:36.260950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:27.031 [2024-11-19 07:32:36.260959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:27.031 [2024-11-19 07:32:36.260967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.031 [2024-11-19 07:32:36.261666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.031 [2024-11-19 07:32:36.264095] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 201.859 ms, result 0 00:17:27.031 [2024-11-19 07:32:36.264993] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:27.289 Some configs were skipped because the RPC state that can call them passed over. 
00:17:27.289 07:32:36 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:27.289 [2024-11-19 07:32:36.483036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.289 [2024-11-19 07:32:36.483080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:27.289 [2024-11-19 07:32:36.483090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.048 ms 00:17:27.289 [2024-11-19 07:32:36.483098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.289 [2024-11-19 07:32:36.483128] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.142 ms, result 0 00:17:27.289 true 00:17:27.289 07:32:36 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:27.547 [2024-11-19 07:32:36.689854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.547 [2024-11-19 07:32:36.689895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:27.547 [2024-11-19 07:32:36.689907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.509 ms 00:17:27.547 [2024-11-19 07:32:36.689913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.547 [2024-11-19 07:32:36.689943] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.599 ms, result 0 00:17:27.547 true 00:17:27.547 07:32:36 -- ftl/trim.sh@102 -- # killprocess 72642 00:17:27.547 07:32:36 -- common/autotest_common.sh@936 -- # '[' -z 72642 ']' 00:17:27.547 07:32:36 -- common/autotest_common.sh@940 -- # kill -0 72642 00:17:27.547 07:32:36 -- common/autotest_common.sh@941 -- # uname 00:17:27.547 07:32:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.547 07:32:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72642 00:17:27.547 killing process with pid 72642 00:17:27.547 07:32:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:27.547 07:32:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:27.547 07:32:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72642' 00:17:27.547 07:32:36 -- common/autotest_common.sh@955 -- # kill 72642 00:17:27.547 07:32:36 -- common/autotest_common.sh@960 -- # wait 72642 00:17:28.115 [2024-11-19 07:32:37.265471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.265523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:28.115 [2024-11-19 07:32:37.265534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.115 [2024-11-19 07:32:37.265544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.265560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:28.115 [2024-11-19 07:32:37.267548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.267573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:28.115 [2024-11-19 07:32:37.267585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:17:28.115 [2024-11-19 07:32:37.267592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 
07:32:37.267799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.267808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.115 [2024-11-19 07:32:37.267816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:17:28.115 [2024-11-19 07:32:37.267822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.270884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.270913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:28.115 [2024-11-19 07:32:37.270922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:17:28.115 [2024-11-19 07:32:37.270928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.276234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.276267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:28.115 [2024-11-19 07:32:37.276276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.278 ms 00:17:28.115 [2024-11-19 07:32:37.276284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.283640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.283668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.115 [2024-11-19 07:32:37.283678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.315 ms 00:17:28.115 [2024-11-19 07:32:37.283684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.290289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.290318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.115 [2024-11-19 07:32:37.290328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.574 ms 00:17:28.115 [2024-11-19 07:32:37.290334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.290438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.290446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.115 [2024-11-19 07:32:37.290454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:28.115 [2024-11-19 07:32:37.290460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.298552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.298578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:28.115 [2024-11-19 07:32:37.298587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.075 ms 00:17:28.115 [2024-11-19 07:32:37.298593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.306039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.306067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:28.115 [2024-11-19 07:32:37.306079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.418 ms 00:17:28.115 [2024-11-19 07:32:37.306085] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.313400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.313425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.115 [2024-11-19 07:32:37.313433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.285 ms 00:17:28.115 [2024-11-19 07:32:37.313439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.320359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.115 [2024-11-19 07:32:37.320386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:28.115 [2024-11-19 07:32:37.320395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.873 ms 00:17:28.115 [2024-11-19 07:32:37.320400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.115 [2024-11-19 07:32:37.320433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.115 [2024-11-19 07:32:37.320445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320572] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:28.115 [2024-11-19 07:32:37.320586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320748] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 
07:32:37.320922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.320998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:17:28.116 [2024-11-19 07:32:37.321095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:28.116 [2024-11-19 07:32:37.321162] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:28.116 [2024-11-19 07:32:37.321171] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4382e0-c346-4fea-b0fa-834ba9590bee 00:17:28.116 [2024-11-19 07:32:37.321177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:28.116 [2024-11-19 07:32:37.321198] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:28.116 [2024-11-19 07:32:37.321205] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:28.116 [2024-11-19 07:32:37.321213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:28.116 [2024-11-19 07:32:37.321219] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:28.116 [2024-11-19 07:32:37.321227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:28.116 [2024-11-19 07:32:37.321233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:28.116 [2024-11-19 07:32:37.321239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:28.116 [2024-11-19 07:32:37.321245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:28.117 [2024-11-19 07:32:37.321252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.117 [2024-11-19 07:32:37.321258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:28.117 [2024-11-19 07:32:37.321266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:17:28.117 [2024-11-19 07:32:37.321273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.117 [2024-11-19 07:32:37.330994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.117 [2024-11-19 07:32:37.331020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.117 [2024-11-19 07:32:37.331031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.704 ms 00:17:28.117 [2024-11-19 07:32:37.331037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.117 [2024-11-19 07:32:37.331231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.117 [2024-11-19 07:32:37.331240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.117 
[2024-11-19 07:32:37.331250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:17:28.117 [2024-11-19 07:32:37.331256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.117 [2024-11-19 07:32:37.365843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.117 [2024-11-19 07:32:37.365871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.117 [2024-11-19 07:32:37.365881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.117 [2024-11-19 07:32:37.365887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.117 [2024-11-19 07:32:37.365951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.117 [2024-11-19 07:32:37.365958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.117 [2024-11-19 07:32:37.365967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.117 [2024-11-19 07:32:37.365973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.117 [2024-11-19 07:32:37.366005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.117 [2024-11-19 07:32:37.366013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.117 [2024-11-19 07:32:37.366022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.117 [2024-11-19 07:32:37.366029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.117 [2024-11-19 07:32:37.366044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.117 [2024-11-19 07:32:37.366051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.117 [2024-11-19 07:32:37.366059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.117 [2024-11-19 07:32:37.366066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.426062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.426102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.375 [2024-11-19 07:32:37.426112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.426120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.448034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.375 [2024-11-19 07:32:37.448046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.448052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.448103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.375 [2024-11-19 07:32:37.448113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.448119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.448149] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.375 [2024-11-19 07:32:37.448157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.448163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.448257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.375 [2024-11-19 07:32:37.448265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.448271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.448304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.375 [2024-11-19 07:32:37.448311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.448317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.448354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.375 [2024-11-19 07:32:37.448363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.448369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.375 [2024-11-19 07:32:37.448411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.375 [2024-11-19 07:32:37.448418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.375 [2024-11-19 07:32:37.448424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.375 [2024-11-19 07:32:37.448527] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 183.039 ms, result 0 00:17:28.941 07:32:38 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.941 [2024-11-19 07:32:38.151034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
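With the FTL device shut down cleanly ('FTL shutdown', 183.039 ms), trim.sh@105 reads the data back through spdk_dd, SPDK's dd-style utility: --ib names the input bdev, --of the output file, --count the number of blocks to copy, and --json the bdev configuration to load at startup. A hedged sketch of the same read-back step, assuming the ftl.json written earlier by the test:

  # Dump 65536 blocks from the ftl0 bdev into a flat file. spdk_dd boots a
  # minimal SPDK app from the --json config, which is why the FTL startup
  # trace below repeats before the copy begins.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_dd --ib=ftl0 \
      --of="$SPDK"/test/ftl/data \
      --count=65536 \
      --json="$SPDK"/test/ftl/config/ftl.json

The 'Copying: N/256 [MB]' progress lines further down come from this command; 65536 blocks totalling the reported 256 MiB works out to a 4 KiB block size.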
00:17:28.941 [2024-11-19 07:32:38.151150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72698 ] 00:17:29.200 [2024-11-19 07:32:38.298626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.200 [2024-11-19 07:32:38.446722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.458 [2024-11-19 07:32:38.651249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.458 [2024-11-19 07:32:38.651297] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.718 [2024-11-19 07:32:38.801538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.801584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.718 [2024-11-19 07:32:38.801596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.718 [2024-11-19 07:32:38.801603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.804176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.804222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.718 [2024-11-19 07:32:38.804232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:17:29.718 [2024-11-19 07:32:38.804239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.804306] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.718 [2024-11-19 07:32:38.805008] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.718 [2024-11-19 07:32:38.805032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.805040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.718 [2024-11-19 07:32:38.805049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:17:29.718 [2024-11-19 07:32:38.805055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.806113] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:29.718 [2024-11-19 07:32:38.818554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.818586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:29.718 [2024-11-19 07:32:38.818596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.442 ms 00:17:29.718 [2024-11-19 07:32:38.818604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.818678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.818688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:29.718 [2024-11-19 07:32:38.818696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:29.718 [2024-11-19 07:32:38.818703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.823478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 
07:32:38.823515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.718 [2024-11-19 07:32:38.823525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:17:29.718 [2024-11-19 07:32:38.823539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.823644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.823655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.718 [2024-11-19 07:32:38.823670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:29.718 [2024-11-19 07:32:38.823678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.823705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.823717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.718 [2024-11-19 07:32:38.823725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:29.718 [2024-11-19 07:32:38.823732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.823759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:29.718 [2024-11-19 07:32:38.827170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.827205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.718 [2024-11-19 07:32:38.827213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 00:17:29.718 [2024-11-19 07:32:38.827223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.827259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.827269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.718 [2024-11-19 07:32:38.827276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:29.718 [2024-11-19 07:32:38.827283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.827300] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:29.718 [2024-11-19 07:32:38.827316] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:29.718 [2024-11-19 07:32:38.827349] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:29.718 [2024-11-19 07:32:38.827366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:29.718 [2024-11-19 07:32:38.827437] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:29.718 [2024-11-19 07:32:38.827447] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.718 [2024-11-19 07:32:38.827457] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:29.718 [2024-11-19 07:32:38.827467] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.718 [2024-11-19 07:32:38.827475] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.718 [2024-11-19 07:32:38.827483] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:29.718 [2024-11-19 07:32:38.827490] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.718 [2024-11-19 07:32:38.827497] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:29.718 [2024-11-19 07:32:38.827507] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:29.718 [2024-11-19 07:32:38.827514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.827522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.718 [2024-11-19 07:32:38.827530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:29.718 [2024-11-19 07:32:38.827536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.827600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.718 [2024-11-19 07:32:38.827609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.718 [2024-11-19 07:32:38.827617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:29.718 [2024-11-19 07:32:38.827623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.718 [2024-11-19 07:32:38.827708] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.718 [2024-11-19 07:32:38.827727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.718 [2024-11-19 07:32:38.827735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.718 [2024-11-19 07:32:38.827743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.718 [2024-11-19 07:32:38.827750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.718 [2024-11-19 07:32:38.827757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.718 [2024-11-19 07:32:38.827763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:29.718 [2024-11-19 07:32:38.827770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.718 [2024-11-19 07:32:38.827776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:29.718 [2024-11-19 07:32:38.827784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.718 [2024-11-19 07:32:38.827790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.718 [2024-11-19 07:32:38.827797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:29.718 [2024-11-19 07:32:38.827803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.718 [2024-11-19 07:32:38.827812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.718 [2024-11-19 07:32:38.827824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:29.718 [2024-11-19 07:32:38.827830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.718 [2024-11-19 07:32:38.827837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.718 [2024-11-19 07:32:38.827843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:29.718 [2024-11-19 07:32:38.827849] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:29.718 [2024-11-19 07:32:38.827855] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:29.718 [2024-11-19 07:32:38.827862] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:29.718 [2024-11-19 07:32:38.827868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:29.718 [2024-11-19 07:32:38.827875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.718 [2024-11-19 07:32:38.827881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:29.719 [2024-11-19 07:32:38.827887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:29.719 [2024-11-19 07:32:38.827893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.719 [2024-11-19 07:32:38.827899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:29.719 [2024-11-19 07:32:38.827905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:29.719 [2024-11-19 07:32:38.827911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.719 [2024-11-19 07:32:38.827918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:29.719 [2024-11-19 07:32:38.827923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:29.719 [2024-11-19 07:32:38.827929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.719 [2024-11-19 07:32:38.827936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:29.719 [2024-11-19 07:32:38.827943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:29.719 [2024-11-19 07:32:38.827949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.719 [2024-11-19 07:32:38.827955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:29.719 [2024-11-19 07:32:38.827961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.719 [2024-11-19 07:32:38.827968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.719 [2024-11-19 07:32:38.827974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:29.719 [2024-11-19 07:32:38.827980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.719 [2024-11-19 07:32:38.827986] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.719 [2024-11-19 07:32:38.827993] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.719 [2024-11-19 07:32:38.828000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.719 [2024-11-19 07:32:38.828010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.719 [2024-11-19 07:32:38.828017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:29.719 [2024-11-19 07:32:38.828024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.719 [2024-11-19 07:32:38.828030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.719 [2024-11-19 07:32:38.828036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.719 [2024-11-19 07:32:38.828042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.719 [2024-11-19 07:32:38.828049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.719 [2024-11-19 07:32:38.828056] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.719 [2024-11-19 07:32:38.828065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.719 [2024-11-19 07:32:38.828073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:29.719 [2024-11-19 07:32:38.828080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:29.719 [2024-11-19 07:32:38.828087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:29.719 [2024-11-19 07:32:38.828094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:29.719 [2024-11-19 07:32:38.828100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:29.719 [2024-11-19 07:32:38.828107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:29.719 [2024-11-19 07:32:38.828113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:29.719 [2024-11-19 07:32:38.828120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:29.719 [2024-11-19 07:32:38.828127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:29.719 [2024-11-19 07:32:38.828134] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:29.719 [2024-11-19 07:32:38.828141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:29.719 [2024-11-19 07:32:38.828147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:29.719 [2024-11-19 07:32:38.828155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:29.719 [2024-11-19 07:32:38.828161] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.719 [2024-11-19 07:32:38.828173] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.719 [2024-11-19 07:32:38.828200] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.719 [2024-11-19 07:32:38.828207] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:29.719 [2024-11-19 07:32:38.828214] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.719 [2024-11-19 07:32:38.828221] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.719 [2024-11-19 07:32:38.828228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.828235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.719 [2024-11-19 07:32:38.828242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:17:29.719 [2024-11-19 07:32:38.828249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.842815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.842851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.719 [2024-11-19 07:32:38.842862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.524 ms 00:17:29.719 [2024-11-19 07:32:38.842869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.842979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.842989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.719 [2024-11-19 07:32:38.842996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:29.719 [2024-11-19 07:32:38.843004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.883398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.883436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.719 [2024-11-19 07:32:38.883448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.374 ms 00:17:29.719 [2024-11-19 07:32:38.883456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.883523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.883533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.719 [2024-11-19 07:32:38.883545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.719 [2024-11-19 07:32:38.883553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.883861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.883884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.719 [2024-11-19 07:32:38.883893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:29.719 [2024-11-19 07:32:38.883900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.884015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.884025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.719 [2024-11-19 07:32:38.884033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:29.719 [2024-11-19 07:32:38.884040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.897889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.897919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.719 [2024-11-19 07:32:38.897928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.828 ms 00:17:29.719 
[2024-11-19 07:32:38.897939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.910645] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:29.719 [2024-11-19 07:32:38.910678] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:29.719 [2024-11-19 07:32:38.910688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.910696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:29.719 [2024-11-19 07:32:38.910704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.659 ms 00:17:29.719 [2024-11-19 07:32:38.910711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.935058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.935095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:29.719 [2024-11-19 07:32:38.935105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.281 ms 00:17:29.719 [2024-11-19 07:32:38.935112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.947046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.947087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:29.719 [2024-11-19 07:32:38.947102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.886 ms 00:17:29.719 [2024-11-19 07:32:38.947109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.958782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.958811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:29.719 [2024-11-19 07:32:38.958821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.615 ms 00:17:29.719 [2024-11-19 07:32:38.958828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.719 [2024-11-19 07:32:38.959203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.719 [2024-11-19 07:32:38.959232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.719 [2024-11-19 07:32:38.959241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:17:29.720 [2024-11-19 07:32:38.959250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.017379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.017423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:29.978 [2024-11-19 07:32:39.017437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.106 ms 00:17:29.978 [2024-11-19 07:32:39.017450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.027945] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:29.978 [2024-11-19 07:32:39.041771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.041805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.978 [2024-11-19 07:32:39.041817] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.240 ms 00:17:29.978 [2024-11-19 07:32:39.041825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.041892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.041902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:29.978 [2024-11-19 07:32:39.041914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:29.978 [2024-11-19 07:32:39.041921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.041966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.041975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.978 [2024-11-19 07:32:39.041983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:29.978 [2024-11-19 07:32:39.041990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.043148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.043193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:29.978 [2024-11-19 07:32:39.043202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:17:29.978 [2024-11-19 07:32:39.043209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.043239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.043251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.978 [2024-11-19 07:32:39.043259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.978 [2024-11-19 07:32:39.043265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.043295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:29.978 [2024-11-19 07:32:39.043305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.043312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:29.978 [2024-11-19 07:32:39.043320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:29.978 [2024-11-19 07:32:39.043327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.067117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.067149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.978 [2024-11-19 07:32:39.067159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.769 ms 00:17:29.978 [2024-11-19 07:32:39.067166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.067252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.978 [2024-11-19 07:32:39.067263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.978 [2024-11-19 07:32:39.067271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:29.978 [2024-11-19 07:32:39.067277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.978 [2024-11-19 07:32:39.068321] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.978 [2024-11-19 07:32:39.071388] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 266.492 ms, result 0 00:17:29.978 [2024-11-19 07:32:39.072730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.978 [2024-11-19 07:32:39.085974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.912  [2024-11-19T07:32:41.538Z] Copying: 16/256 [MB] (16 MBps) [2024-11-19T07:32:42.472Z] Copying: 39/256 [MB] (23 MBps) [2024-11-19T07:32:43.405Z] Copying: 57/256 [MB] (17 MBps) [2024-11-19T07:32:44.339Z] Copying: 69/256 [MB] (12 MBps) [2024-11-19T07:32:45.271Z] Copying: 85/256 [MB] (15 MBps) [2024-11-19T07:32:46.205Z] Copying: 106/256 [MB] (21 MBps) [2024-11-19T07:32:47.139Z] Copying: 129/256 [MB] (23 MBps) [2024-11-19T07:32:48.510Z] Copying: 149/256 [MB] (19 MBps) [2024-11-19T07:32:49.444Z] Copying: 170/256 [MB] (20 MBps) [2024-11-19T07:32:50.436Z] Copying: 188/256 [MB] (17 MBps) [2024-11-19T07:32:51.369Z] Copying: 202/256 [MB] (14 MBps) [2024-11-19T07:32:52.301Z] Copying: 220/256 [MB] (18 MBps) [2024-11-19T07:32:52.865Z] Copying: 244/256 [MB] (23 MBps) [2024-11-19T07:32:53.123Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-19 07:32:53.025807] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.873 [2024-11-19 07:32:53.041230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.873 [2024-11-19 07:32:53.041282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:43.873 [2024-11-19 07:32:53.041294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.873 [2024-11-19 07:32:53.041303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.873 [2024-11-19 07:32:53.041326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:43.873 [2024-11-19 07:32:53.043816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.873 [2024-11-19 07:32:53.043844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.874 [2024-11-19 07:32:53.043854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:17:43.874 [2024-11-19 07:32:53.043862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.874 [2024-11-19 07:32:53.044136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.874 [2024-11-19 07:32:53.044147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.874 [2024-11-19 07:32:53.044155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:43.874 [2024-11-19 07:32:53.044165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.874 [2024-11-19 07:32:53.047865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.874 [2024-11-19 07:32:53.047886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.874 [2024-11-19 07:32:53.047895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.677 ms 00:17:43.874 [2024-11-19 07:32:53.047903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.874 [2024-11-19 07:32:53.056005] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.874 [2024-11-19 07:32:53.056033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:43.874 [2024-11-19 07:32:53.056044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.074 ms 00:17:43.874 [2024-11-19 07:32:53.056052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.874 [2024-11-19 07:32:53.080258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.874 [2024-11-19 07:32:53.080291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:43.874 [2024-11-19 07:32:53.080302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.139 ms 00:17:43.874 [2024-11-19 07:32:53.080309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.874 [2024-11-19 07:32:53.094156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.874 [2024-11-19 07:32:53.094197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:43.874 [2024-11-19 07:32:53.094208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.800 ms 00:17:43.874 [2024-11-19 07:32:53.094215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.874 [2024-11-19 07:32:53.094346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.874 [2024-11-19 07:32:53.094356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:43.874 [2024-11-19 07:32:53.094364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:43.874 [2024-11-19 07:32:53.094371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.874 [2024-11-19 07:32:53.120453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.874 [2024-11-19 07:32:53.120493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:43.874 [2024-11-19 07:32:53.120505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.064 ms 00:17:43.874 [2024-11-19 07:32:53.120512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.133 [2024-11-19 07:32:53.143936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.133 [2024-11-19 07:32:53.143970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:44.133 [2024-11-19 07:32:53.143981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.375 ms 00:17:44.133 [2024-11-19 07:32:53.143988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.133 [2024-11-19 07:32:53.167095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.133 [2024-11-19 07:32:53.167126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.133 [2024-11-19 07:32:53.167136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.062 ms 00:17:44.133 [2024-11-19 07:32:53.167143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.133 [2024-11-19 07:32:53.190238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.133 [2024-11-19 07:32:53.190269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.133 [2024-11-19 07:32:53.190280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.015 ms 00:17:44.133 [2024-11-19 07:32:53.190287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:44.134 [2024-11-19 07:32:53.190328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.134 [2024-11-19 07:32:53.190343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:17:44.134 [2024-11-19 07:32:53.190519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.134 [2024-11-19 07:32:53.190960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.190967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.190974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.190982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.190989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.190996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191064] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.135 [2024-11-19 07:32:53.191087] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.135 [2024-11-19 07:32:53.191094] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a4382e0-c346-4fea-b0fa-834ba9590bee 00:17:44.135 [2024-11-19 07:32:53.191102] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.135 [2024-11-19 07:32:53.191109] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.135 [2024-11-19 07:32:53.191115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.135 [2024-11-19 07:32:53.191123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.135 [2024-11-19 07:32:53.191130] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.135 [2024-11-19 07:32:53.191139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.135 [2024-11-19 07:32:53.191146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.135 [2024-11-19 07:32:53.191152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.135 [2024-11-19 07:32:53.191158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.135 [2024-11-19 07:32:53.191165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.135 [2024-11-19 07:32:53.191172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.135 [2024-11-19 07:32:53.191190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:17:44.135 [2024-11-19 07:32:53.191198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.203605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.135 [2024-11-19 07:32:53.203634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.135 [2024-11-19 07:32:53.203649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.379 ms 00:17:44.135 [2024-11-19 07:32:53.203658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.203853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.135 [2024-11-19 07:32:53.203862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.135 [2024-11-19 07:32:53.203870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:44.135 [2024-11-19 07:32:53.203876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.241401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.241433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.135 [2024-11-19 07:32:53.241447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.241454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.241525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.241534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.135 
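
Each management step in the traces above is reported as a name/duration pair of trace_step notices on consecutive lines, so per-step cost can be tabulated straight from a capture of this output. A minimal sketch, assuming the console log has been saved to a file called ftl.log (a stand-in name, not something the test itself produces):

    awk '/407:trace_step/ { name = $0; sub(/.*name: /, "", name) }
         /409:trace_step/ { dur = $0; sub(/.*duration: /, "", dur); sub(/ ms.*/, "", dur)
                            printf "%10.3f ms  %s\n", dur, name }' ftl.log

Run against this section, it shows the persist steps (NV cache metadata, valid map, band info, trim, superblock) each costing roughly 13-26 ms, together well over a third of the 302 ms 'FTL shutdown' total reported further down.
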
[2024-11-19 07:32:53.241542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.241549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.241586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.241595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.135 [2024-11-19 07:32:53.241602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.241612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.241630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.241637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.135 [2024-11-19 07:32:53.241644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.241651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.314028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.314065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.135 [2024-11-19 07:32:53.314078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.314085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.342986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.343020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.135 [2024-11-19 07:32:53.343030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.343037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.343079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.343088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.135 [2024-11-19 07:32:53.343096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.343103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.343136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.343143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.135 [2024-11-19 07:32:53.343151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.343157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.343258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.343268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.135 [2024-11-19 07:32:53.343276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.343283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.343315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.343323] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.135 [2024-11-19 07:32:53.343330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.343337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.343371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.343380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.135 [2024-11-19 07:32:53.343388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.343395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.343441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.135 [2024-11-19 07:32:53.343457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.135 [2024-11-19 07:32:53.343465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.135 [2024-11-19 07:32:53.343473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.135 [2024-11-19 07:32:53.343603] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 302.398 ms, result 0 00:17:45.069 00:17:45.069 00:17:45.069 07:32:54 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:45.635 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:45.635 07:32:54 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:45.635 07:32:54 -- ftl/trim.sh@109 -- # fio_kill 00:17:45.635 07:32:54 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:45.635 07:32:54 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.635 07:32:54 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:45.635 07:32:54 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:45.635 Process with pid 72642 is not found 00:17:45.635 07:32:54 -- ftl/trim.sh@20 -- # killprocess 72642 00:17:45.635 07:32:54 -- common/autotest_common.sh@936 -- # '[' -z 72642 ']' 00:17:45.635 07:32:54 -- common/autotest_common.sh@940 -- # kill -0 72642 00:17:45.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72642) - No such process 00:17:45.635 07:32:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72642 is not found' 00:17:45.635 ************************************ 00:17:45.635 END TEST ftl_trim 00:17:45.635 ************************************ 00:17:45.635 00:17:45.635 real 1m8.326s 00:17:45.635 user 1m29.811s 00:17:45.635 sys 0m5.090s 00:17:45.635 07:32:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:45.635 07:32:54 -- common/autotest_common.sh@10 -- # set +x 00:17:45.635 07:32:54 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:45.635 07:32:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:45.635 07:32:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:45.635 07:32:54 -- common/autotest_common.sh@10 -- # set +x 00:17:45.635 ************************************ 00:17:45.635 START TEST ftl_restore 00:17:45.635 ************************************ 00:17:45.635 07:32:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:06.0 0000:00:07.0 00:17:45.895 * Looking for test storage... 00:17:45.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.895 07:32:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:45.895 07:32:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:45.895 07:32:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:45.895 07:32:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:45.895 07:32:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:45.895 07:32:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:45.895 07:32:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:45.895 07:32:54 -- scripts/common.sh@335 -- # IFS=.-: 00:17:45.895 07:32:54 -- scripts/common.sh@335 -- # read -ra ver1 00:17:45.895 07:32:54 -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.896 07:32:54 -- scripts/common.sh@336 -- # read -ra ver2 00:17:45.896 07:32:54 -- scripts/common.sh@337 -- # local 'op=<' 00:17:45.896 07:32:54 -- scripts/common.sh@339 -- # ver1_l=2 00:17:45.896 07:32:54 -- scripts/common.sh@340 -- # ver2_l=1 00:17:45.896 07:32:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:45.896 07:32:54 -- scripts/common.sh@343 -- # case "$op" in 00:17:45.896 07:32:54 -- scripts/common.sh@344 -- # : 1 00:17:45.896 07:32:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:45.896 07:32:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.896 07:32:54 -- scripts/common.sh@364 -- # decimal 1 00:17:45.896 07:32:54 -- scripts/common.sh@352 -- # local d=1 00:17:45.896 07:32:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.896 07:32:54 -- scripts/common.sh@354 -- # echo 1 00:17:45.896 07:32:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:45.896 07:32:54 -- scripts/common.sh@365 -- # decimal 2 00:17:45.896 07:32:54 -- scripts/common.sh@352 -- # local d=2 00:17:45.896 07:32:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.896 07:32:54 -- scripts/common.sh@354 -- # echo 2 00:17:45.896 07:32:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:45.896 07:32:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:45.896 07:32:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:45.896 07:32:54 -- scripts/common.sh@367 -- # return 0 00:17:45.896 07:32:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.896 07:32:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:45.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.896 --rc genhtml_branch_coverage=1 00:17:45.896 --rc genhtml_function_coverage=1 00:17:45.896 --rc genhtml_legend=1 00:17:45.896 --rc geninfo_all_blocks=1 00:17:45.896 --rc geninfo_unexecuted_blocks=1 00:17:45.896 00:17:45.896 ' 00:17:45.896 07:32:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:45.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.896 --rc genhtml_branch_coverage=1 00:17:45.896 --rc genhtml_function_coverage=1 00:17:45.896 --rc genhtml_legend=1 00:17:45.896 --rc geninfo_all_blocks=1 00:17:45.896 --rc geninfo_unexecuted_blocks=1 00:17:45.896 00:17:45.896 ' 00:17:45.896 07:32:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:45.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.896 --rc genhtml_branch_coverage=1 00:17:45.896 --rc genhtml_function_coverage=1 00:17:45.896 --rc genhtml_legend=1 00:17:45.896 --rc geninfo_all_blocks=1 00:17:45.896 --rc 
geninfo_unexecuted_blocks=1 00:17:45.896 00:17:45.896 ' 00:17:45.896 07:32:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:45.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.896 --rc genhtml_branch_coverage=1 00:17:45.896 --rc genhtml_function_coverage=1 00:17:45.896 --rc genhtml_legend=1 00:17:45.896 --rc geninfo_all_blocks=1 00:17:45.896 --rc geninfo_unexecuted_blocks=1 00:17:45.896 00:17:45.896 ' 00:17:45.896 07:32:54 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:45.896 07:32:54 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:45.896 07:32:54 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.896 07:32:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.896 07:32:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:45.896 07:32:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.896 07:32:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.896 07:32:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:45.896 07:32:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:45.896 07:32:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.896 07:32:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.896 07:32:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:45.896 07:32:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:45.896 07:32:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.896 07:32:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.896 07:32:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:45.896 07:32:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:45.896 07:32:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.896 07:32:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.896 07:32:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:45.896 07:32:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:45.896 07:32:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.896 07:32:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.896 07:32:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.896 07:32:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.896 07:32:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:45.896 07:32:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:45.896 07:32:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.896 07:32:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.896 07:32:55 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.896 07:32:55 -- ftl/restore.sh@13 -- # mktemp -d 00:17:45.896 07:32:55 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.FKy2rIUTbV 00:17:45.896 07:32:55 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:45.896 07:32:55 -- ftl/restore.sh@16 -- # case $opt in 00:17:45.896 07:32:55 -- ftl/restore.sh@18 -- # 
nv_cache=0000:00:06.0 00:17:45.896 07:32:55 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:45.896 07:32:55 -- ftl/restore.sh@23 -- # shift 2 00:17:45.896 07:32:55 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:45.896 07:32:55 -- ftl/restore.sh@25 -- # timeout=240 00:17:45.896 07:32:55 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:45.896 07:32:55 -- ftl/restore.sh@39 -- # svcpid=72939 00:17:45.896 07:32:55 -- ftl/restore.sh@41 -- # waitforlisten 72939 00:17:45.896 07:32:55 -- common/autotest_common.sh@829 -- # '[' -z 72939 ']' 00:17:45.896 07:32:55 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.896 07:32:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.896 07:32:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.896 07:32:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.896 07:32:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.896 07:32:55 -- common/autotest_common.sh@10 -- # set +x 00:17:45.896 [2024-11-19 07:32:55.088213] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:45.896 [2024-11-19 07:32:55.088493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:17:46.154 [2024-11-19 07:32:55.235076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.411 [2024-11-19 07:32:55.450718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.411 [2024-11-19 07:32:55.451033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.345 07:32:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.345 07:32:56 -- common/autotest_common.sh@862 -- # return 0 00:17:47.345 07:32:56 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:47.345 07:32:56 -- ftl/common.sh@54 -- # local name=nvme0 00:17:47.345 07:32:56 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:47.345 07:32:56 -- ftl/common.sh@56 -- # local size=103424 00:17:47.345 07:32:56 -- ftl/common.sh@59 -- # local base_bdev 00:17:47.345 07:32:56 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:47.602 07:32:56 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:47.602 07:32:56 -- ftl/common.sh@62 -- # local base_size 00:17:47.602 07:32:56 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:47.602 07:32:56 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:47.602 07:32:56 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:47.602 07:32:56 -- common/autotest_common.sh@1369 -- # local bs 00:17:47.602 07:32:56 -- common/autotest_common.sh@1370 -- # local nb 00:17:47.602 07:32:56 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:47.860 07:32:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:47.860 { 00:17:47.860 "name": "nvme0n1", 00:17:47.860 "aliases": [ 00:17:47.860 "6b45c217-992b-40f2-81c2-9fbaf5e5558e" 00:17:47.860 ], 00:17:47.860 "product_name": "NVMe disk", 00:17:47.860 
"block_size": 4096, 00:17:47.860 "num_blocks": 1310720, 00:17:47.860 "uuid": "6b45c217-992b-40f2-81c2-9fbaf5e5558e", 00:17:47.860 "assigned_rate_limits": { 00:17:47.860 "rw_ios_per_sec": 0, 00:17:47.860 "rw_mbytes_per_sec": 0, 00:17:47.860 "r_mbytes_per_sec": 0, 00:17:47.860 "w_mbytes_per_sec": 0 00:17:47.860 }, 00:17:47.860 "claimed": true, 00:17:47.860 "claim_type": "read_many_write_one", 00:17:47.860 "zoned": false, 00:17:47.860 "supported_io_types": { 00:17:47.860 "read": true, 00:17:47.860 "write": true, 00:17:47.860 "unmap": true, 00:17:47.860 "write_zeroes": true, 00:17:47.860 "flush": true, 00:17:47.860 "reset": true, 00:17:47.860 "compare": true, 00:17:47.860 "compare_and_write": false, 00:17:47.860 "abort": true, 00:17:47.861 "nvme_admin": true, 00:17:47.861 "nvme_io": true 00:17:47.861 }, 00:17:47.861 "driver_specific": { 00:17:47.861 "nvme": [ 00:17:47.861 { 00:17:47.861 "pci_address": "0000:00:07.0", 00:17:47.861 "trid": { 00:17:47.861 "trtype": "PCIe", 00:17:47.861 "traddr": "0000:00:07.0" 00:17:47.861 }, 00:17:47.861 "ctrlr_data": { 00:17:47.861 "cntlid": 0, 00:17:47.861 "vendor_id": "0x1b36", 00:17:47.861 "model_number": "QEMU NVMe Ctrl", 00:17:47.861 "serial_number": "12341", 00:17:47.861 "firmware_revision": "8.0.0", 00:17:47.861 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:47.861 "oacs": { 00:17:47.861 "security": 0, 00:17:47.861 "format": 1, 00:17:47.861 "firmware": 0, 00:17:47.861 "ns_manage": 1 00:17:47.861 }, 00:17:47.861 "multi_ctrlr": false, 00:17:47.861 "ana_reporting": false 00:17:47.861 }, 00:17:47.861 "vs": { 00:17:47.861 "nvme_version": "1.4" 00:17:47.861 }, 00:17:47.861 "ns_data": { 00:17:47.861 "id": 1, 00:17:47.861 "can_share": false 00:17:47.861 } 00:17:47.861 } 00:17:47.861 ], 00:17:47.861 "mp_policy": "active_passive" 00:17:47.861 } 00:17:47.861 } 00:17:47.861 ]' 00:17:47.861 07:32:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:47.861 07:32:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:47.861 07:32:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:47.861 07:32:57 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:47.861 07:32:57 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:47.861 07:32:57 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:47.861 07:32:57 -- ftl/common.sh@63 -- # base_size=5120 00:17:47.861 07:32:57 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:47.861 07:32:57 -- ftl/common.sh@67 -- # clear_lvols 00:17:47.861 07:32:57 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:47.861 07:32:57 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:48.119 07:32:57 -- ftl/common.sh@28 -- # stores=5466d1a8-07da-460d-bfc3-52ec814bfb86 00:17:48.119 07:32:57 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:48.119 07:32:57 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5466d1a8-07da-460d-bfc3-52ec814bfb86 00:17:48.377 07:32:57 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:48.635 07:32:57 -- ftl/common.sh@68 -- # lvs=685f1ed7-f614-4605-a0e7-1d55726cacb3 00:17:48.635 07:32:57 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 685f1ed7-f614-4605-a0e7-1d55726cacb3 00:17:48.635 07:32:57 -- ftl/restore.sh@43 -- # split_bdev=98122e08-4357-4c88-a59f-ea3130d17e83 00:17:48.635 07:32:57 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:48.635 07:32:57 -- 
ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 98122e08-4357-4c88-a59f-ea3130d17e83 00:17:48.635 07:32:57 -- ftl/common.sh@35 -- # local name=nvc0 00:17:48.635 07:32:57 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:48.635 07:32:57 -- ftl/common.sh@37 -- # local base_bdev=98122e08-4357-4c88-a59f-ea3130d17e83 00:17:48.635 07:32:57 -- ftl/common.sh@38 -- # local cache_size= 00:17:48.635 07:32:57 -- ftl/common.sh@41 -- # get_bdev_size 98122e08-4357-4c88-a59f-ea3130d17e83 00:17:48.635 07:32:57 -- common/autotest_common.sh@1367 -- # local bdev_name=98122e08-4357-4c88-a59f-ea3130d17e83 00:17:48.635 07:32:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:48.635 07:32:57 -- common/autotest_common.sh@1369 -- # local bs 00:17:48.635 07:32:57 -- common/autotest_common.sh@1370 -- # local nb 00:17:48.635 07:32:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98122e08-4357-4c88-a59f-ea3130d17e83 00:17:48.894 07:32:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:48.894 { 00:17:48.894 "name": "98122e08-4357-4c88-a59f-ea3130d17e83", 00:17:48.894 "aliases": [ 00:17:48.894 "lvs/nvme0n1p0" 00:17:48.894 ], 00:17:48.894 "product_name": "Logical Volume", 00:17:48.894 "block_size": 4096, 00:17:48.894 "num_blocks": 26476544, 00:17:48.894 "uuid": "98122e08-4357-4c88-a59f-ea3130d17e83", 00:17:48.894 "assigned_rate_limits": { 00:17:48.894 "rw_ios_per_sec": 0, 00:17:48.894 "rw_mbytes_per_sec": 0, 00:17:48.894 "r_mbytes_per_sec": 0, 00:17:48.894 "w_mbytes_per_sec": 0 00:17:48.894 }, 00:17:48.894 "claimed": false, 00:17:48.894 "zoned": false, 00:17:48.894 "supported_io_types": { 00:17:48.894 "read": true, 00:17:48.894 "write": true, 00:17:48.894 "unmap": true, 00:17:48.894 "write_zeroes": true, 00:17:48.894 "flush": false, 00:17:48.894 "reset": true, 00:17:48.894 "compare": false, 00:17:48.894 "compare_and_write": false, 00:17:48.894 "abort": false, 00:17:48.894 "nvme_admin": false, 00:17:48.894 "nvme_io": false 00:17:48.894 }, 00:17:48.894 "driver_specific": { 00:17:48.894 "lvol": { 00:17:48.894 "lvol_store_uuid": "685f1ed7-f614-4605-a0e7-1d55726cacb3", 00:17:48.894 "base_bdev": "nvme0n1", 00:17:48.894 "thin_provision": true, 00:17:48.894 "snapshot": false, 00:17:48.894 "clone": false, 00:17:48.894 "esnap_clone": false 00:17:48.894 } 00:17:48.894 } 00:17:48.894 } 00:17:48.894 ]' 00:17:48.894 07:32:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:48.894 07:32:58 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:48.894 07:32:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:48.894 07:32:58 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:48.894 07:32:58 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:48.894 07:32:58 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:48.894 07:32:58 -- ftl/common.sh@41 -- # local base_size=5171 00:17:48.894 07:32:58 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:48.894 07:32:58 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:49.152 07:32:58 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:49.152 07:32:58 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:49.152 07:32:58 -- ftl/common.sh@48 -- # get_bdev_size 98122e08-4357-4c88-a59f-ea3130d17e83 00:17:49.152 07:32:58 -- common/autotest_common.sh@1367 -- # local bdev_name=98122e08-4357-4c88-a59f-ea3130d17e83 00:17:49.152 07:32:58 -- common/autotest_common.sh@1368 -- # local bdev_info 
00:17:49.152 07:32:58 -- common/autotest_common.sh@1369 -- # local bs 00:17:49.152 07:32:58 -- common/autotest_common.sh@1370 -- # local nb 00:17:49.152 07:32:58 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98122e08-4357-4c88-a59f-ea3130d17e83 00:17:49.411 07:32:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:49.411 { 00:17:49.411 "name": "98122e08-4357-4c88-a59f-ea3130d17e83", 00:17:49.411 "aliases": [ 00:17:49.411 "lvs/nvme0n1p0" 00:17:49.411 ], 00:17:49.411 "product_name": "Logical Volume", 00:17:49.411 "block_size": 4096, 00:17:49.411 "num_blocks": 26476544, 00:17:49.411 "uuid": "98122e08-4357-4c88-a59f-ea3130d17e83", 00:17:49.411 "assigned_rate_limits": { 00:17:49.411 "rw_ios_per_sec": 0, 00:17:49.411 "rw_mbytes_per_sec": 0, 00:17:49.411 "r_mbytes_per_sec": 0, 00:17:49.411 "w_mbytes_per_sec": 0 00:17:49.411 }, 00:17:49.411 "claimed": false, 00:17:49.411 "zoned": false, 00:17:49.411 "supported_io_types": { 00:17:49.411 "read": true, 00:17:49.411 "write": true, 00:17:49.411 "unmap": true, 00:17:49.411 "write_zeroes": true, 00:17:49.411 "flush": false, 00:17:49.411 "reset": true, 00:17:49.411 "compare": false, 00:17:49.411 "compare_and_write": false, 00:17:49.411 "abort": false, 00:17:49.411 "nvme_admin": false, 00:17:49.411 "nvme_io": false 00:17:49.411 }, 00:17:49.411 "driver_specific": { 00:17:49.411 "lvol": { 00:17:49.411 "lvol_store_uuid": "685f1ed7-f614-4605-a0e7-1d55726cacb3", 00:17:49.411 "base_bdev": "nvme0n1", 00:17:49.411 "thin_provision": true, 00:17:49.411 "snapshot": false, 00:17:49.411 "clone": false, 00:17:49.411 "esnap_clone": false 00:17:49.411 } 00:17:49.411 } 00:17:49.411 } 00:17:49.411 ]' 00:17:49.411 07:32:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:49.411 07:32:58 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:49.411 07:32:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:49.411 07:32:58 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:49.411 07:32:58 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:49.411 07:32:58 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:49.411 07:32:58 -- ftl/common.sh@48 -- # cache_size=5171 00:17:49.411 07:32:58 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:49.669 07:32:58 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:49.669 07:32:58 -- ftl/restore.sh@48 -- # get_bdev_size 98122e08-4357-4c88-a59f-ea3130d17e83 00:17:49.669 07:32:58 -- common/autotest_common.sh@1367 -- # local bdev_name=98122e08-4357-4c88-a59f-ea3130d17e83 00:17:49.669 07:32:58 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:49.669 07:32:58 -- common/autotest_common.sh@1369 -- # local bs 00:17:49.669 07:32:58 -- common/autotest_common.sh@1370 -- # local nb 00:17:49.669 07:32:58 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98122e08-4357-4c88-a59f-ea3130d17e83 00:17:49.927 07:32:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:49.927 { 00:17:49.927 "name": "98122e08-4357-4c88-a59f-ea3130d17e83", 00:17:49.927 "aliases": [ 00:17:49.927 "lvs/nvme0n1p0" 00:17:49.927 ], 00:17:49.927 "product_name": "Logical Volume", 00:17:49.927 "block_size": 4096, 00:17:49.927 "num_blocks": 26476544, 00:17:49.927 "uuid": "98122e08-4357-4c88-a59f-ea3130d17e83", 00:17:49.927 "assigned_rate_limits": { 00:17:49.927 "rw_ios_per_sec": 0, 00:17:49.927 "rw_mbytes_per_sec": 0, 00:17:49.927 "r_mbytes_per_sec": 0, 
00:17:49.927 "w_mbytes_per_sec": 0 00:17:49.927 }, 00:17:49.927 "claimed": false, 00:17:49.927 "zoned": false, 00:17:49.927 "supported_io_types": { 00:17:49.927 "read": true, 00:17:49.927 "write": true, 00:17:49.927 "unmap": true, 00:17:49.927 "write_zeroes": true, 00:17:49.927 "flush": false, 00:17:49.927 "reset": true, 00:17:49.927 "compare": false, 00:17:49.927 "compare_and_write": false, 00:17:49.927 "abort": false, 00:17:49.927 "nvme_admin": false, 00:17:49.927 "nvme_io": false 00:17:49.927 }, 00:17:49.927 "driver_specific": { 00:17:49.927 "lvol": { 00:17:49.927 "lvol_store_uuid": "685f1ed7-f614-4605-a0e7-1d55726cacb3", 00:17:49.927 "base_bdev": "nvme0n1", 00:17:49.927 "thin_provision": true, 00:17:49.927 "snapshot": false, 00:17:49.927 "clone": false, 00:17:49.927 "esnap_clone": false 00:17:49.927 } 00:17:49.927 } 00:17:49.927 } 00:17:49.927 ]' 00:17:49.927 07:32:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:49.927 07:32:59 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:49.927 07:32:59 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:49.927 07:32:59 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:49.927 07:32:59 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:49.927 07:32:59 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:49.927 07:32:59 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:49.927 07:32:59 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 98122e08-4357-4c88-a59f-ea3130d17e83 --l2p_dram_limit 10' 00:17:49.927 07:32:59 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:49.927 07:32:59 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:49.927 07:32:59 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:49.927 07:32:59 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:49.927 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:49.927 07:32:59 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 98122e08-4357-4c88-a59f-ea3130d17e83 --l2p_dram_limit 10 -c nvc0n1p0 00:17:50.185 [2024-11-19 07:32:59.198219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.198361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.185 [2024-11-19 07:32:59.198380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.185 [2024-11-19 07:32:59.198388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.185 [2024-11-19 07:32:59.198439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.198447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.185 [2024-11-19 07:32:59.198454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:50.185 [2024-11-19 07:32:59.198460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.185 [2024-11-19 07:32:59.198477] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.185 [2024-11-19 07:32:59.199110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.185 [2024-11-19 07:32:59.199125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.199131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.185 
[2024-11-19 07:32:59.199139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:17:50.185 [2024-11-19 07:32:59.199145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.185 [2024-11-19 07:32:59.199210] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 37031644-9415-4ff1-9650-6b30c5b3b16d 00:17:50.185 [2024-11-19 07:32:59.200139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.200168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:50.185 [2024-11-19 07:32:59.200176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:50.185 [2024-11-19 07:32:59.200192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.185 [2024-11-19 07:32:59.204971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.204998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.185 [2024-11-19 07:32:59.205006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:17:50.185 [2024-11-19 07:32:59.205012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.185 [2024-11-19 07:32:59.205079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.205087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.185 [2024-11-19 07:32:59.205094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:50.185 [2024-11-19 07:32:59.205103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.185 [2024-11-19 07:32:59.205141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.205152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.185 [2024-11-19 07:32:59.205157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:50.185 [2024-11-19 07:32:59.205164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.185 [2024-11-19 07:32:59.205198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.185 [2024-11-19 07:32:59.208171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.185 [2024-11-19 07:32:59.208207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.185 [2024-11-19 07:32:59.208217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:17:50.185 [2024-11-19 07:32:59.208223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.186 [2024-11-19 07:32:59.208253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.186 [2024-11-19 07:32:59.208259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.186 [2024-11-19 07:32:59.208267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:50.186 [2024-11-19 07:32:59.208272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.186 [2024-11-19 07:32:59.208286] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:50.186 [2024-11-19 07:32:59.208372] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:50.186 [2024-11-19 
07:32:59.208384] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.186 [2024-11-19 07:32:59.208391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:50.186 [2024-11-19 07:32:59.208400] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208406] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208415] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:50.186 [2024-11-19 07:32:59.208427] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.186 [2024-11-19 07:32:59.208434] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:50.186 [2024-11-19 07:32:59.208440] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:50.186 [2024-11-19 07:32:59.208447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.186 [2024-11-19 07:32:59.208452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.186 [2024-11-19 07:32:59.208460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:50.186 [2024-11-19 07:32:59.208465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.186 [2024-11-19 07:32:59.208518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.186 [2024-11-19 07:32:59.208524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.186 [2024-11-19 07:32:59.208530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:50.186 [2024-11-19 07:32:59.208537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.186 [2024-11-19 07:32:59.208593] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.186 [2024-11-19 07:32:59.208600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.186 [2024-11-19 07:32:59.208607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.186 [2024-11-19 07:32:59.208625] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.186 [2024-11-19 07:32:59.208643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.186 [2024-11-19 07:32:59.208654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.186 [2024-11-19 07:32:59.208660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:50.186 [2024-11-19 07:32:59.208667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.186 [2024-11-19 07:32:59.208673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.186 [2024-11-19 07:32:59.208679] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:50.186 [2024-11-19 07:32:59.208684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.186 [2024-11-19 07:32:59.208696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:50.186 [2024-11-19 07:32:59.208702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:50.186 [2024-11-19 07:32:59.208713] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:50.186 [2024-11-19 07:32:59.208718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.186 [2024-11-19 07:32:59.208729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.186 [2024-11-19 07:32:59.208745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.186 [2024-11-19 07:32:59.208760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.186 [2024-11-19 07:32:59.208779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.186 [2024-11-19 07:32:59.208795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.186 [2024-11-19 07:32:59.208805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.186 [2024-11-19 07:32:59.208812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:50.186 [2024-11-19 07:32:59.208816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.186 [2024-11-19 07:32:59.208822] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.186 [2024-11-19 07:32:59.208827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.186 [2024-11-19 07:32:59.208834] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.186 [2024-11-19 07:32:59.208846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.186 [2024-11-19 07:32:59.208852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 
00:17:50.186 [2024-11-19 07:32:59.208858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.186 [2024-11-19 07:32:59.208864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.186 [2024-11-19 07:32:59.208871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.186 [2024-11-19 07:32:59.208875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.186 [2024-11-19 07:32:59.208882] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.186 [2024-11-19 07:32:59.208889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.186 [2024-11-19 07:32:59.208896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:50.186 [2024-11-19 07:32:59.208902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:50.186 [2024-11-19 07:32:59.208908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:50.186 [2024-11-19 07:32:59.208913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:50.186 [2024-11-19 07:32:59.208920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:50.186 [2024-11-19 07:32:59.208925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:50.186 [2024-11-19 07:32:59.208931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:50.186 [2024-11-19 07:32:59.208936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:50.186 [2024-11-19 07:32:59.208943] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:50.186 [2024-11-19 07:32:59.208948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:50.186 [2024-11-19 07:32:59.208954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:50.186 [2024-11-19 07:32:59.208960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:50.186 [2024-11-19 07:32:59.208969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:50.186 [2024-11-19 07:32:59.208974] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.186 [2024-11-19 07:32:59.208982] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.186 [2024-11-19 07:32:59.208988] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.186 
[2024-11-19 07:32:59.208994] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.186 [2024-11-19 07:32:59.208999] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.186 [2024-11-19 07:32:59.209006] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.186 [2024-11-19 07:32:59.209011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.186 [2024-11-19 07:32:59.209018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.186 [2024-11-19 07:32:59.209024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:17:50.186 [2024-11-19 07:32:59.209031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.186 [2024-11-19 07:32:59.221107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.221236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.187 [2024-11-19 07:32:59.221281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.035 ms 00:17:50.187 [2024-11-19 07:32:59.221302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.221381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.221400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.187 [2024-11-19 07:32:59.221417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:50.187 [2024-11-19 07:32:59.221432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.245405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.245502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.187 [2024-11-19 07:32:59.245551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.929 ms 00:17:50.187 [2024-11-19 07:32:59.245571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.245606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.245623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.187 [2024-11-19 07:32:59.245638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:50.187 [2024-11-19 07:32:59.245655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.245963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.246066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.187 [2024-11-19 07:32:59.246122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:50.187 [2024-11-19 07:32:59.246143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.246249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.246275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.187 [2024-11-19 07:32:59.246335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.080 ms 00:17:50.187 [2024-11-19 07:32:59.246355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.258372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.258463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.187 [2024-11-19 07:32:59.258505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.991 ms 00:17:50.187 [2024-11-19 07:32:59.258524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.267479] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:50.187 [2024-11-19 07:32:59.269847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.269930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:50.187 [2024-11-19 07:32:59.269981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.254 ms 00:17:50.187 [2024-11-19 07:32:59.269999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.344631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.187 [2024-11-19 07:32:59.344769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:50.187 [2024-11-19 07:32:59.344791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.598 ms 00:17:50.187 [2024-11-19 07:32:59.344800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.187 [2024-11-19 07:32:59.344843] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:17:50.187 [2024-11-19 07:32:59.344854] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:53.467 [2024-11-19 07:33:02.246043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.246284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:53.467 [2024-11-19 07:33:02.246313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2901.182 ms 00:17:53.467 [2024-11-19 07:33:02.246323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.246498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.246508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:53.467 [2024-11-19 07:33:02.246521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:53.467 [2024-11-19 07:33:02.246529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.270565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.270601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:53.467 [2024-11-19 07:33:02.270614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.991 ms 00:17:53.467 [2024-11-19 07:33:02.270622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.293914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.294033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:53.467 [2024-11-19 07:33:02.294057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.267 ms 00:17:53.467 [2024-11-19 07:33:02.294063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.294379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.294389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:53.467 [2024-11-19 07:33:02.294398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:53.467 [2024-11-19 07:33:02.294405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.356863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.356989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:53.467 [2024-11-19 07:33:02.357010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.421 ms 00:17:53.467 [2024-11-19 07:33:02.357018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.381346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.381381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:53.467 [2024-11-19 07:33:02.381393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:17:53.467 [2024-11-19 07:33:02.381400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.382613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.382643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:17:53.467 [2024-11-19 07:33:02.382656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:17:53.467 [2024-11-19 07:33:02.382663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.407121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.407152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:53.467 [2024-11-19 07:33:02.407165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.423 ms 00:17:53.467 [2024-11-19 07:33:02.407172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.407227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.407237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:53.467 [2024-11-19 07:33:02.407247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:53.467 [2024-11-19 07:33:02.407254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.407341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.467 [2024-11-19 07:33:02.407351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:53.467 [2024-11-19 07:33:02.407360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:53.467 [2024-11-19 07:33:02.407367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.467 [2024-11-19 07:33:02.408165] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3209.539 ms, result 0 00:17:53.467 { 00:17:53.467 "name": "ftl0", 00:17:53.467 "uuid": "37031644-9415-4ff1-9650-6b30c5b3b16d" 00:17:53.467 } 00:17:53.467 07:33:02 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:53.467 07:33:02 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:53.467 07:33:02 -- ftl/restore.sh@63 -- # echo ']}' 00:17:53.467 07:33:02 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:53.727 [2024-11-19 07:33:02.795796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.795846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:53.727 [2024-11-19 07:33:02.795858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:53.727 [2024-11-19 07:33:02.795867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.795891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:53.727 [2024-11-19 07:33:02.798517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.798643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:53.727 [2024-11-19 07:33:02.798664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:17:53.727 [2024-11-19 07:33:02.798678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.798955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.798965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:53.727 [2024-11-19 
07:33:02.798974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:17:53.727 [2024-11-19 07:33:02.798981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.802241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.802263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:53.727 [2024-11-19 07:33:02.802274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:17:53.727 [2024-11-19 07:33:02.802281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.808361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.808388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:53.727 [2024-11-19 07:33:02.808399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.059 ms 00:17:53.727 [2024-11-19 07:33:02.808406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.832868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.832900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:53.727 [2024-11-19 07:33:02.832912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.401 ms 00:17:53.727 [2024-11-19 07:33:02.832920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.848493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.848525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:53.727 [2024-11-19 07:33:02.848538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.533 ms 00:17:53.727 [2024-11-19 07:33:02.848546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.848691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.848701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:53.727 [2024-11-19 07:33:02.848711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:53.727 [2024-11-19 07:33:02.848720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.872621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.872650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:53.727 [2024-11-19 07:33:02.872663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.878 ms 00:17:53.727 [2024-11-19 07:33:02.872669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.896285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.896315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:53.727 [2024-11-19 07:33:02.896327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.578 ms 00:17:53.727 [2024-11-19 07:33:02.896333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.919000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.919030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:17:53.727 [2024-11-19 07:33:02.919041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.630 ms 00:17:53.727 [2024-11-19 07:33:02.919048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.942036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.727 [2024-11-19 07:33:02.942156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:53.727 [2024-11-19 07:33:02.942175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.918 ms 00:17:53.727 [2024-11-19 07:33:02.942197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.727 [2024-11-19 07:33:02.942231] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:53.727 [2024-11-19 07:33:02.942247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 
[2024-11-19 07:33:02.942409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:53.727 [2024-11-19 07:33:02.942548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:17:53.728 [2024-11-19 07:33:02.942611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.942993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:53.728 [2024-11-19 07:33:02.943083] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:53.728 [2024-11-19 07:33:02.943092] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37031644-9415-4ff1-9650-6b30c5b3b16d 00:17:53.728 [2024-11-19 07:33:02.943100] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:53.728 [2024-11-19 07:33:02.943109] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:53.728 [2024-11-19 07:33:02.943116] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:53.728 [2024-11-19 07:33:02.943124] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:53.728 [2024-11-19 07:33:02.943131] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:53.728 [2024-11-19 07:33:02.943140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:53.728 [2024-11-19 07:33:02.943147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:53.728 [2024-11-19 07:33:02.943156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:53.728 [2024-11-19 07:33:02.943162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:53.728 [2024-11-19 07:33:02.943172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.728 [2024-11-19 07:33:02.943188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:53.728 [2024-11-19 07:33:02.943200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:17:53.728 [2024-11-19 07:33:02.943207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.728 [2024-11-19 07:33:02.955410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.728 [2024-11-19 07:33:02.955439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:53.728 [2024-11-19 07:33:02.955451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.171 ms 00:17:53.728 [2024-11-19 07:33:02.955459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.728 [2024-11-19 07:33:02.955656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.728 [2024-11-19 07:33:02.955667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:53.728 [2024-11-19 07:33:02.955676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:17:53.728 [2024-11-19 07:33:02.955683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.010 
[2024-11-19 07:33:03.000242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.010 [2024-11-19 07:33:03.000373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.010 [2024-11-19 07:33:03.000391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.010 [2024-11-19 07:33:03.000398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.010 [2024-11-19 07:33:03.000460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.010 [2024-11-19 07:33:03.000470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.011 [2024-11-19 07:33:03.000480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.000487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.000551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.000560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.011 [2024-11-19 07:33:03.000569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.000576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.000594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.000602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.011 [2024-11-19 07:33:03.000612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.000619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.075972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.076108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.011 [2024-11-19 07:33:03.076129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.076136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.105378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.011 [2024-11-19 07:33:03.105390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.105399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.105476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.011 [2024-11-19 07:33:03.105485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.105492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.105544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.011 [2024-11-19 07:33:03.105553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.105562] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.105657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.011 [2024-11-19 07:33:03.105665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.105672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.105713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:54.011 [2024-11-19 07:33:03.105722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.105729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.105776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.011 [2024-11-19 07:33:03.105785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.105792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.011 [2024-11-19 07:33:03.105844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.011 [2024-11-19 07:33:03.105852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.011 [2024-11-19 07:33:03.105861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.011 [2024-11-19 07:33:03.105982] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.151 ms, result 0 00:17:54.011 true 00:17:54.011 07:33:03 -- ftl/restore.sh@66 -- # killprocess 72939 00:17:54.011 07:33:03 -- common/autotest_common.sh@936 -- # '[' -z 72939 ']' 00:17:54.011 07:33:03 -- common/autotest_common.sh@940 -- # kill -0 72939 00:17:54.011 07:33:03 -- common/autotest_common.sh@941 -- # uname 00:17:54.011 07:33:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:54.011 07:33:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72939 00:17:54.011 killing process with pid 72939 00:17:54.011 07:33:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:54.011 07:33:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:54.011 07:33:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72939' 00:17:54.011 07:33:03 -- common/autotest_common.sh@955 -- # kill 72939 00:17:54.011 07:33:03 -- common/autotest_common.sh@960 -- # wait 72939 00:18:00.608 07:33:08 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:03.890 262144+0 records in 00:18:03.890 262144+0 records out 00:18:03.890 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.09807 s, 262 MB/s 00:18:03.890 07:33:12 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:05.806 07:33:14 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.806 [2024-11-19 07:33:14.874646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:05.806 [2024-11-19 07:33:14.874908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73204 ] 00:18:05.806 [2024-11-19 07:33:15.016861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.066 [2024-11-19 07:33:15.190978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.323 [2024-11-19 07:33:15.442082] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.324 [2024-11-19 07:33:15.442143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.584 [2024-11-19 07:33:15.591987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.592031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.584 [2024-11-19 07:33:15.592044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:06.584 [2024-11-19 07:33:15.592054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.592095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.592105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.584 [2024-11-19 07:33:15.592113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:06.584 [2024-11-19 07:33:15.592121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.592136] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.584 [2024-11-19 07:33:15.592855] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.584 [2024-11-19 07:33:15.592874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.592881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.584 [2024-11-19 07:33:15.592889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:18:06.584 [2024-11-19 07:33:15.592895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.594035] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:06.584 [2024-11-19 07:33:15.606692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.606724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:06.584 [2024-11-19 07:33:15.606736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:18:06.584 [2024-11-19 07:33:15.606745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.606794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.606803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:06.584 [2024-11-19 07:33:15.606810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:06.584 [2024-11-19 07:33:15.606817] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.611644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.611673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.584 [2024-11-19 07:33:15.611682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.772 ms 00:18:06.584 [2024-11-19 07:33:15.611689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.611764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.611773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.584 [2024-11-19 07:33:15.611781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:06.584 [2024-11-19 07:33:15.611788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.611830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.611839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.584 [2024-11-19 07:33:15.611846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:06.584 [2024-11-19 07:33:15.611853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.611879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.584 [2024-11-19 07:33:15.615382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.584 [2024-11-19 07:33:15.615514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.584 [2024-11-19 07:33:15.615529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.514 ms 00:18:06.584 [2024-11-19 07:33:15.615537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.584 [2024-11-19 07:33:15.615571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.585 [2024-11-19 07:33:15.615578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.585 [2024-11-19 07:33:15.615586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:06.585 [2024-11-19 07:33:15.615594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.585 [2024-11-19 07:33:15.615613] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:06.585 [2024-11-19 07:33:15.615631] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:06.585 [2024-11-19 07:33:15.615661] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:06.585 [2024-11-19 07:33:15.615676] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:06.585 [2024-11-19 07:33:15.615747] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:06.585 [2024-11-19 07:33:15.615757] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.585 [2024-11-19 07:33:15.615768] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:06.585 [2024-11-19 
07:33:15.615778] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.585 [2024-11-19 07:33:15.615786] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.585 [2024-11-19 07:33:15.615794] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:06.585 [2024-11-19 07:33:15.615800] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:06.585 [2024-11-19 07:33:15.615808] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:06.585 [2024-11-19 07:33:15.615815] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:06.585 [2024-11-19 07:33:15.615822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.585 [2024-11-19 07:33:15.615828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.585 [2024-11-19 07:33:15.615835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:18:06.585 [2024-11-19 07:33:15.615842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.585 [2024-11-19 07:33:15.615906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.585 [2024-11-19 07:33:15.615914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.585 [2024-11-19 07:33:15.615921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:06.585 [2024-11-19 07:33:15.615927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.585 [2024-11-19 07:33:15.616006] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.585 [2024-11-19 07:33:15.616016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:06.585 [2024-11-19 07:33:15.616023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:06.585 [2024-11-19 07:33:15.616044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.585 [2024-11-19 07:33:15.616065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.585 [2024-11-19 07:33:15.616078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.585 [2024-11-19 07:33:15.616084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:06.585 [2024-11-19 07:33:15.616091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.585 [2024-11-19 07:33:15.616097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.585 [2024-11-19 07:33:15.616104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:06.585 [2024-11-19 07:33:15.616110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.585 
[2024-11-19 07:33:15.616128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:06.585 [2024-11-19 07:33:15.616134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:06.585 [2024-11-19 07:33:15.616146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:06.585 [2024-11-19 07:33:15.616153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.585 [2024-11-19 07:33:15.616165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.585 [2024-11-19 07:33:15.616210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.585 [2024-11-19 07:33:15.616230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.585 [2024-11-19 07:33:15.616248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.585 [2024-11-19 07:33:15.616267] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.585 [2024-11-19 07:33:15.616279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.585 [2024-11-19 07:33:15.616286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:06.585 [2024-11-19 07:33:15.616292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.585 [2024-11-19 07:33:15.616298] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.585 [2024-11-19 07:33:15.616308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.585 [2024-11-19 07:33:15.616315] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.585 [2024-11-19 07:33:15.616331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:06.585 [2024-11-19 07:33:15.616338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.585 [2024-11-19 07:33:15.616344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.585 [2024-11-19 07:33:15.616351] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.585 [2024-11-19 07:33:15.616357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 0.25 MiB 00:18:06.585 [2024-11-19 07:33:15.616364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:06.585 [2024-11-19 07:33:15.616371] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.585 [2024-11-19 07:33:15.616381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.585 [2024-11-19 07:33:15.616389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:06.585 [2024-11-19 07:33:15.616396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:06.585 [2024-11-19 07:33:15.616403] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:06.585 [2024-11-19 07:33:15.616410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:06.585 [2024-11-19 07:33:15.616416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:06.585 [2024-11-19 07:33:15.616423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:06.586 [2024-11-19 07:33:15.616429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:06.586 [2024-11-19 07:33:15.616436] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:06.586 [2024-11-19 07:33:15.616443] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:06.586 [2024-11-19 07:33:15.616449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:06.586 [2024-11-19 07:33:15.616456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:06.586 [2024-11-19 07:33:15.616463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:06.586 [2024-11-19 07:33:15.616470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:06.586 [2024-11-19 07:33:15.616477] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.586 [2024-11-19 07:33:15.616485] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.586 [2024-11-19 07:33:15.616492] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:06.586 [2024-11-19 07:33:15.616499] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:06.586 [2024-11-19 07:33:15.616506] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 
blk_offs:0x1900040 blk_sz:0x360 00:18:06.586 [2024-11-19 07:33:15.616514] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:06.586 [2024-11-19 07:33:15.616522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.616529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:06.586 [2024-11-19 07:33:15.616536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:18:06.586 [2024-11-19 07:33:15.616542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.631520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.631628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.586 [2024-11-19 07:33:15.631676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.937 ms 00:18:06.586 [2024-11-19 07:33:15.631703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.631796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.631816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:06.586 [2024-11-19 07:33:15.631835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:06.586 [2024-11-19 07:33:15.631853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.673479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.673611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.586 [2024-11-19 07:33:15.673669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.572 ms 00:18:06.586 [2024-11-19 07:33:15.673691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.673742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.673766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.586 [2024-11-19 07:33:15.673785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:06.586 [2024-11-19 07:33:15.673803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.674161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.674227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.586 [2024-11-19 07:33:15.674250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:18:06.586 [2024-11-19 07:33:15.674274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.674451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.674477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.586 [2024-11-19 07:33:15.674496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:06.586 [2024-11-19 07:33:15.674515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.688402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.688510] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.586 [2024-11-19 07:33:15.688557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.829 ms 00:18:06.586 [2024-11-19 07:33:15.688579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.701380] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:06.586 [2024-11-19 07:33:15.701509] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:06.586 [2024-11-19 07:33:15.701565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.701585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:06.586 [2024-11-19 07:33:15.701605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.879 ms 00:18:06.586 [2024-11-19 07:33:15.701623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.726432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.726545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:06.586 [2024-11-19 07:33:15.726595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.768 ms 00:18:06.586 [2024-11-19 07:33:15.726616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.738511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.738627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:06.586 [2024-11-19 07:33:15.738674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.852 ms 00:18:06.586 [2024-11-19 07:33:15.738695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.750419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.750526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:06.586 [2024-11-19 07:33:15.750581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.687 ms 00:18:06.586 [2024-11-19 07:33:15.750601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.751213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.751330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:06.586 [2024-11-19 07:33:15.751389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:06.586 [2024-11-19 07:33:15.751430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.809897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.810045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:06.586 [2024-11-19 07:33:15.810100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.434 ms 00:18:06.586 [2024-11-19 07:33:15.810122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.821079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:06.586 [2024-11-19 07:33:15.823450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.823549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:06.586 [2024-11-19 07:33:15.823602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.017 ms 00:18:06.586 [2024-11-19 07:33:15.823627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.586 [2024-11-19 07:33:15.823714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.586 [2024-11-19 07:33:15.823740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:06.587 [2024-11-19 07:33:15.823760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:06.587 [2024-11-19 07:33:15.823778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.587 [2024-11-19 07:33:15.823851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.587 [2024-11-19 07:33:15.823896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:06.587 [2024-11-19 07:33:15.823916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:06.587 [2024-11-19 07:33:15.823934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.587 [2024-11-19 07:33:15.825126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.587 [2024-11-19 07:33:15.825252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:06.587 [2024-11-19 07:33:15.825303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:18:06.587 [2024-11-19 07:33:15.825325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.587 [2024-11-19 07:33:15.825363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.587 [2024-11-19 07:33:15.825384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:06.587 [2024-11-19 07:33:15.825402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:06.587 [2024-11-19 07:33:15.825426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.587 [2024-11-19 07:33:15.825467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:06.587 [2024-11-19 07:33:15.825489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.587 [2024-11-19 07:33:15.825507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:06.587 [2024-11-19 07:33:15.825560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:06.587 [2024-11-19 07:33:15.825582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.845 [2024-11-19 07:33:15.849567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.845 [2024-11-19 07:33:15.849675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:06.845 [2024-11-19 07:33:15.849723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.954 ms 00:18:06.845 [2024-11-19 07:33:15.849744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.845 [2024-11-19 07:33:15.849814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.846 [2024-11-19 07:33:15.849843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:06.846 [2024-11-19 07:33:15.849862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 
00:18:06.846 [2024-11-19 07:33:15.849881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.846 [2024-11-19 07:33:15.850789] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.390 ms, result 0 [2024-11-19T07:34:11.751Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-19 07:34:11.635506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.501 [2024-11-19 07:34:11.635550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:02.501 [2024-11-19 07:34:11.635564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:02.501 [2024-11-19 07:34:11.635571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.501 [2024-11-19 07:34:11.635592] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.501 [2024-11-19 07:34:11.638115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.501 [2024-11-19 07:34:11.638140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:02.502 [2024-11-19 07:34:11.638155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:19:02.502 [2024-11-19 07:34:11.638163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.639838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19 07:34:11.639955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:02.502 [2024-11-19 07:34:11.639971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:19:02.502 [2024-11-19 07:34:11.639979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.655077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19 07:34:11.655198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:02.502 [2024-11-19 07:34:11.655213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.082 ms 00:19:02.502 [2024-11-19 07:34:11.655226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.661312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19 07:34:11.661340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:02.502 [2024-11-19 07:34:11.661350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.059 ms 00:19:02.502 [2024-11-19 07:34:11.661357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.685982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19 07:34:11.686015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:02.502 [2024-11-19 07:34:11.686025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.572 ms 00:19:02.502 [2024-11-19 07:34:11.686032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.700655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19 07:34:11.700687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:02.502 [2024-11-19 07:34:11.700698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.592 ms 00:19:02.502 [2024-11-19 07:34:11.700705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.700839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19
07:34:11.700849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:02.502 [2024-11-19 07:34:11.700857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:02.502 [2024-11-19 07:34:11.700864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.725040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19 07:34:11.725165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:02.502 [2024-11-19 07:34:11.725195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.162 ms 00:19:02.502 [2024-11-19 07:34:11.725202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.502 [2024-11-19 07:34:11.748476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.502 [2024-11-19 07:34:11.748506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:02.502 [2024-11-19 07:34:11.748516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.211 ms 00:19:02.502 [2024-11-19 07:34:11.748531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.762 [2024-11-19 07:34:11.771298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.762 [2024-11-19 07:34:11.771413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:02.762 [2024-11-19 07:34:11.771427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.737 ms 00:19:02.762 [2024-11-19 07:34:11.771435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.762 [2024-11-19 07:34:11.794276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.762 [2024-11-19 07:34:11.794395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:02.762 [2024-11-19 07:34:11.794409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.783 ms 00:19:02.762 [2024-11-19 07:34:11.794416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.762 [2024-11-19 07:34:11.794442] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:02.762 [2024-11-19 07:34:11.794455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 
07:34:11.794528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:19:02.763 [2024-11-19 07:34:11.794718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 
wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.794996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:02.763 [2024-11-19 07:34:11.795106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:02.764 [2024-11-19 07:34:11.795227] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:02.764 [2024-11-19 07:34:11.795234] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37031644-9415-4ff1-9650-6b30c5b3b16d 00:19:02.764 [2024-11-19 07:34:11.795242] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:02.764 [2024-11-19 07:34:11.795249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:02.764 [2024-11-19 07:34:11.795256] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:02.764 [2024-11-19 07:34:11.795264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:02.764 [2024-11-19 07:34:11.795270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:02.764 [2024-11-19 07:34:11.795277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:02.764 [2024-11-19 07:34:11.795284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:02.764 [2024-11-19 07:34:11.795290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:02.764 [2024-11-19 07:34:11.795303] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:02.764 [2024-11-19 07:34:11.795309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.764 [2024-11-19 07:34:11.795317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:02.764 [2024-11-19 07:34:11.795325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:19:02.764 [2024-11-19 07:34:11.795334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.807633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.764 [2024-11-19 07:34:11.807663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:02.764 [2024-11-19 07:34:11.807673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.273 ms 00:19:02.764 [2024-11-19 07:34:11.807680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.807871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.764 [2024-11-19 07:34:11.807879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:02.764 [2024-11-19 07:34:11.807891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:19:02.764 [2024-11-19 07:34:11.807897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.843016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.843051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.764 [2024-11-19 07:34:11.843060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.843068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.843118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.843126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.764 [2024-11-19 07:34:11.843137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.843144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.843218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.843228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.764 [2024-11-19 07:34:11.843236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.843243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.843257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.843264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.764 [2024-11-19 07:34:11.843271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.843281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.915592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.915631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.764 [2024-11-19 07:34:11.915641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.915648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.944777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.944810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.764 [2024-11-19 07:34:11.944820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.944831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.944884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.944892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.764 [2024-11-19 07:34:11.944900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.944907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.944944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.944953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.764 [2024-11-19 07:34:11.944961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.944968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.945052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.945062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.764 [2024-11-19 07:34:11.945070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.945077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.945103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.945111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:02.764 [2024-11-19 07:34:11.945118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.945125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.945160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.945168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.764 [2024-11-19 07:34:11.945176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.945207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.945247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.764 [2024-11-19 07:34:11.945255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.764 [2024-11-19 07:34:11.945263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.764 [2024-11-19 07:34:11.945270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.764 [2024-11-19 07:34:11.945399] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 309.865 ms, result 0 00:19:04.138 00:19:04.138 00:19:04.138 07:34:13 -- ftl/restore.sh@74 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:04.138 [2024-11-19 07:34:13.081126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:04.138 [2024-11-19 07:34:13.081260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73810 ] 00:19:04.138 [2024-11-19 07:34:13.230807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.396 [2024-11-19 07:34:13.411031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.655 [2024-11-19 07:34:13.662673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.655 [2024-11-19 07:34:13.662735] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.655 [2024-11-19 07:34:13.812706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.655 [2024-11-19 07:34:13.812883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.655 [2024-11-19 07:34:13.812904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.655 [2024-11-19 07:34:13.812916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.655 [2024-11-19 07:34:13.812974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.655 [2024-11-19 07:34:13.812984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.655 [2024-11-19 07:34:13.812992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:04.655 [2024-11-19 07:34:13.812999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.655 [2024-11-19 07:34:13.813019] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.656 [2024-11-19 07:34:13.813756] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.656 [2024-11-19 07:34:13.813773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.813780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.656 [2024-11-19 07:34:13.813789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:19:04.656 [2024-11-19 07:34:13.813797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.814836] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.656 [2024-11-19 07:34:13.827721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.827848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.656 [2024-11-19 07:34:13.827866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.886 ms 00:19:04.656 [2024-11-19 07:34:13.827873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.827923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.827932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.656 [2024-11-19 07:34:13.827940] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:04.656 [2024-11-19 07:34:13.827947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.832871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.832901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.656 [2024-11-19 07:34:13.832911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.868 ms 00:19:04.656 [2024-11-19 07:34:13.832918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.833000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.833009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.656 [2024-11-19 07:34:13.833017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:04.656 [2024-11-19 07:34:13.833024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.833065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.833074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.656 [2024-11-19 07:34:13.833082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:04.656 [2024-11-19 07:34:13.833089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.833115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.656 [2024-11-19 07:34:13.836612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.836638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.656 [2024-11-19 07:34:13.836647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.508 ms 00:19:04.656 [2024-11-19 07:34:13.836653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.836685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.836692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.656 [2024-11-19 07:34:13.836700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:04.656 [2024-11-19 07:34:13.836709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.836727] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.656 [2024-11-19 07:34:13.836744] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:04.656 [2024-11-19 07:34:13.836775] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.656 [2024-11-19 07:34:13.836789] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:04.656 [2024-11-19 07:34:13.836861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:04.656 [2024-11-19 07:34:13.836870] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.656 [2024-11-19 07:34:13.836882] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:04.656 [2024-11-19 07:34:13.836892] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.656 [2024-11-19 07:34:13.836900] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.656 [2024-11-19 07:34:13.836907] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:04.656 [2024-11-19 07:34:13.836914] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.656 [2024-11-19 07:34:13.836921] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:04.656 [2024-11-19 07:34:13.836927] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:04.656 [2024-11-19 07:34:13.836936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.836943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.656 [2024-11-19 07:34:13.836950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:19:04.656 [2024-11-19 07:34:13.836956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.837016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.656 [2024-11-19 07:34:13.837024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.656 [2024-11-19 07:34:13.837031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:04.656 [2024-11-19 07:34:13.837037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.656 [2024-11-19 07:34:13.837115] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.656 [2024-11-19 07:34:13.837125] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.656 [2024-11-19 07:34:13.837133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.656 [2024-11-19 07:34:13.837140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.656 [2024-11-19 07:34:13.837154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:04.656 [2024-11-19 07:34:13.837168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.656 [2024-11-19 07:34:13.837175] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.656 [2024-11-19 07:34:13.837200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.656 [2024-11-19 07:34:13.837206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:04.656 [2024-11-19 07:34:13.837215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.656 [2024-11-19 07:34:13.837222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.656 [2024-11-19 07:34:13.837229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:04.656 [2024-11-19 07:34:13.837235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.656 
[2024-11-19 07:34:13.837248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.656 [2024-11-19 07:34:13.837255] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:04.656 [2024-11-19 07:34:13.837261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:04.656 [2024-11-19 07:34:13.837274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:04.656 [2024-11-19 07:34:13.837280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:04.656 [2024-11-19 07:34:13.837294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.656 [2024-11-19 07:34:13.837302] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.656 [2024-11-19 07:34:13.837315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.656 [2024-11-19 07:34:13.837321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.656 [2024-11-19 07:34:13.837334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.656 [2024-11-19 07:34:13.837340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.656 [2024-11-19 07:34:13.837353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.656 [2024-11-19 07:34:13.837359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:04.656 [2024-11-19 07:34:13.837365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.656 [2024-11-19 07:34:13.837372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.656 [2024-11-19 07:34:13.837378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:04.657 [2024-11-19 07:34:13.837384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.657 [2024-11-19 07:34:13.837390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.657 [2024-11-19 07:34:13.837397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:04.657 [2024-11-19 07:34:13.837403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.657 [2024-11-19 07:34:13.837409] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.657 [2024-11-19 07:34:13.837418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.657 [2024-11-19 07:34:13.837425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.657 [2024-11-19 07:34:13.837433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.657 [2024-11-19 07:34:13.837441] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.657 [2024-11-19 07:34:13.837449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.657 [2024-11-19 07:34:13.837455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.657 [2024-11-19 07:34:13.837462] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:19:04.657 [2024-11-19 07:34:13.837468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.657 [2024-11-19 07:34:13.837475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.657 [2024-11-19 07:34:13.837482] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.657 [2024-11-19 07:34:13.837490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.657 [2024-11-19 07:34:13.837498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:04.657 [2024-11-19 07:34:13.837506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:04.657 [2024-11-19 07:34:13.837513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:04.657 [2024-11-19 07:34:13.837519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:04.657 [2024-11-19 07:34:13.837526] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:04.657 [2024-11-19 07:34:13.837533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:04.657 [2024-11-19 07:34:13.837540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:04.657 [2024-11-19 07:34:13.837547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:04.657 [2024-11-19 07:34:13.837554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:04.657 [2024-11-19 07:34:13.837560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:04.657 [2024-11-19 07:34:13.837567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:04.657 [2024-11-19 07:34:13.837574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:04.657 [2024-11-19 07:34:13.837582] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:04.657 [2024-11-19 07:34:13.837589] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.657 [2024-11-19 07:34:13.837597] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.657 [2024-11-19 07:34:13.837604] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.657 [2024-11-19 07:34:13.837611] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.657 [2024-11-19 
07:34:13.837618] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.657 [2024-11-19 07:34:13.837627] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.657 [2024-11-19 07:34:13.837634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.657 [2024-11-19 07:34:13.837641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.657 [2024-11-19 07:34:13.837648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:19:04.657 [2024-11-19 07:34:13.837655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.657 [2024-11-19 07:34:13.852570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.657 [2024-11-19 07:34:13.852604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.657 [2024-11-19 07:34:13.852615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.875 ms 00:19:04.657 [2024-11-19 07:34:13.852626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.657 [2024-11-19 07:34:13.852709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.657 [2024-11-19 07:34:13.852717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.657 [2024-11-19 07:34:13.852725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:04.657 [2024-11-19 07:34:13.852732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.657 [2024-11-19 07:34:13.897725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.657 [2024-11-19 07:34:13.897765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.657 [2024-11-19 07:34:13.897777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.949 ms 00:19:04.657 [2024-11-19 07:34:13.897785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.657 [2024-11-19 07:34:13.897826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.657 [2024-11-19 07:34:13.897835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.657 [2024-11-19 07:34:13.897843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:04.657 [2024-11-19 07:34:13.897849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.657 [2024-11-19 07:34:13.898205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.657 [2024-11-19 07:34:13.898225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.657 [2024-11-19 07:34:13.898233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:04.657 [2024-11-19 07:34:13.898244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.657 [2024-11-19 07:34:13.898352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.657 [2024-11-19 07:34:13.898364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.657 [2024-11-19 07:34:13.898372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:04.657 [2024-11-19 07:34:13.898379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.916 [2024-11-19 07:34:13.912079] 
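
The layout dump above reports each region twice: once in MiB (the "NV cache layout" / "Base device layout" sections) and once as hex block offsets and sizes (the "SB metadata layout" sections). A quick cross-check, assuming the FTL's 4 KiB block size; this is plain shell arithmetic, not part of the test scripts:

# Convert the blk_sz fields for what appear to be the l2p, data_nvc and
# data_btm regions (types 0x2, 0x8 and 0x9) from 4 KiB blocks to MiB:
for sz in 0x5000 0x100000 0x1900000; do
  printf '%s -> %d MiB\n' "$sz" "$(( sz * 4096 / 1024 / 1024 ))"
done
# 0x5000 -> 80 MiB, 0x100000 -> 4096 MiB, 0x1900000 -> 102400 MiB, matching
# "blocks: 80.00 MiB", "4096.00 MiB" and "102400.00 MiB" above. 80 MiB is
# also exactly the 20971520 L2P entries times the 4-byte L2P address size.
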
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.916 [2024-11-19 07:34:13.912110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.916 [2024-11-19 07:34:13.912120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.681 ms 00:19:04.917 [2024-11-19 07:34:13.912126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:13.924667] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:04.917 [2024-11-19 07:34:13.924699] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.917 [2024-11-19 07:34:13.924710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:13.924717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.917 [2024-11-19 07:34:13.924726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.475 ms 00:19:04.917 [2024-11-19 07:34:13.924732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:13.949025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:13.949058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.917 [2024-11-19 07:34:13.949068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.256 ms 00:19:04.917 [2024-11-19 07:34:13.949075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:13.960977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:13.961007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.917 [2024-11-19 07:34:13.961017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.866 ms 00:19:04.917 [2024-11-19 07:34:13.961023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:13.972735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:13.972771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.917 [2024-11-19 07:34:13.972781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.677 ms 00:19:04.917 [2024-11-19 07:34:13.972787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:13.973139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:13.973151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.917 [2024-11-19 07:34:13.973159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:04.917 [2024-11-19 07:34:13.973166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.032061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.032242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:04.917 [2024-11-19 07:34:14.032261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.863 ms 00:19:04.917 [2024-11-19 07:34:14.032269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.042926] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 9 (of 10) MiB 00:19:04.917 [2024-11-19 07:34:14.045357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.045386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.917 [2024-11-19 07:34:14.045398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.051 ms 00:19:04.917 [2024-11-19 07:34:14.045410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.045474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.045485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:04.917 [2024-11-19 07:34:14.045493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:04.917 [2024-11-19 07:34:14.045500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.045558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.045567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.917 [2024-11-19 07:34:14.045575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:04.917 [2024-11-19 07:34:14.045582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.046732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.046761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:04.917 [2024-11-19 07:34:14.046770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:19:04.917 [2024-11-19 07:34:14.046777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.046804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.046811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.917 [2024-11-19 07:34:14.046823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.917 [2024-11-19 07:34:14.046830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.046859] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:04.917 [2024-11-19 07:34:14.046869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.046878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:04.917 [2024-11-19 07:34:14.046885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:04.917 [2024-11-19 07:34:14.046892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.070953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.070986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.917 [2024-11-19 07:34:14.070997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.043 ms 00:19:04.917 [2024-11-19 07:34:14.071004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.071076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.917 [2024-11-19 07:34:14.071085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
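
Every management step in this sequence is bracketed by the same four trace_step notices (Action, name, duration, status), so per-step durations can be tabulated straight from a saved copy of this console output and compared against the overall total that the "Management process finished" line reports just below. A sketch, with "build.log" as a stand-in path; note it sums every traced step in the file, shutdown steps included:

# Three slowest FTL management steps:
grep -oE 'duration: [0-9.]+ ms' build.log | sort -rnk2 | head -3
# Sum of all traced step durations:
grep -oE 'duration: [0-9.]+ ms' build.log | awk '{s += $2} END {printf "%.3f ms\n", s}'
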
00:19:04.917 [2024-11-19 07:34:14.071093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:04.917 [2024-11-19 07:34:14.071099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.917 [2024-11-19 07:34:14.071957] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.849 ms, result 0 00:19:06.291  [2024-11-19T07:34:16.475Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-19T07:34:17.410Z] Copying: 26/1024 [MB] (13 MBps) [2024-11-19T07:34:18.345Z] Copying: 38/1024 [MB] (12 MBps) [2024-11-19T07:34:19.278Z] Copying: 51/1024 [MB] (12 MBps) [2024-11-19T07:34:20.674Z] Copying: 63/1024 [MB] (12 MBps) [2024-11-19T07:34:21.608Z] Copying: 75/1024 [MB] (12 MBps) [2024-11-19T07:34:22.543Z] Copying: 88/1024 [MB] (12 MBps) [2024-11-19T07:34:23.477Z] Copying: 100/1024 [MB] (12 MBps) [2024-11-19T07:34:24.411Z] Copying: 112/1024 [MB] (12 MBps) [2024-11-19T07:34:25.343Z] Copying: 125/1024 [MB] (12 MBps) [2024-11-19T07:34:26.277Z] Copying: 137/1024 [MB] (12 MBps) [2024-11-19T07:34:27.650Z] Copying: 150/1024 [MB] (12 MBps) [2024-11-19T07:34:28.585Z] Copying: 162/1024 [MB] (12 MBps) [2024-11-19T07:34:29.518Z] Copying: 174/1024 [MB] (12 MBps) [2024-11-19T07:34:30.451Z] Copying: 186/1024 [MB] (12 MBps) [2024-11-19T07:34:31.383Z] Copying: 198/1024 [MB] (12 MBps) [2024-11-19T07:34:32.317Z] Copying: 210/1024 [MB] (12 MBps) [2024-11-19T07:34:33.252Z] Copying: 228/1024 [MB] (18 MBps) [2024-11-19T07:34:34.638Z] Copying: 241/1024 [MB] (12 MBps) [2024-11-19T07:34:35.572Z] Copying: 253/1024 [MB] (12 MBps) [2024-11-19T07:34:36.505Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-19T07:34:37.440Z] Copying: 277/1024 [MB] (11 MBps) [2024-11-19T07:34:38.375Z] Copying: 289/1024 [MB] (12 MBps) [2024-11-19T07:34:39.311Z] Copying: 300/1024 [MB] (11 MBps) [2024-11-19T07:34:40.244Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-19T07:34:41.618Z] Copying: 324/1024 [MB] (12 MBps) [2024-11-19T07:34:42.552Z] Copying: 336/1024 [MB] (12 MBps) [2024-11-19T07:34:43.486Z] Copying: 349/1024 [MB] (12 MBps) [2024-11-19T07:34:44.420Z] Copying: 361/1024 [MB] (11 MBps) [2024-11-19T07:34:45.353Z] Copying: 373/1024 [MB] (12 MBps) [2024-11-19T07:34:46.343Z] Copying: 386/1024 [MB] (12 MBps) [2024-11-19T07:34:47.339Z] Copying: 398/1024 [MB] (12 MBps) [2024-11-19T07:34:48.274Z] Copying: 411/1024 [MB] (12 MBps) [2024-11-19T07:34:49.654Z] Copying: 423/1024 [MB] (12 MBps) [2024-11-19T07:34:50.586Z] Copying: 435/1024 [MB] (11 MBps) [2024-11-19T07:34:51.520Z] Copying: 447/1024 [MB] (12 MBps) [2024-11-19T07:34:52.455Z] Copying: 460/1024 [MB] (12 MBps) [2024-11-19T07:34:53.388Z] Copying: 472/1024 [MB] (12 MBps) [2024-11-19T07:34:54.321Z] Copying: 484/1024 [MB] (12 MBps) [2024-11-19T07:34:55.257Z] Copying: 496/1024 [MB] (11 MBps) [2024-11-19T07:34:56.629Z] Copying: 508/1024 [MB] (11 MBps) [2024-11-19T07:34:57.563Z] Copying: 520/1024 [MB] (12 MBps) [2024-11-19T07:34:58.499Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-19T07:34:59.431Z] Copying: 544/1024 [MB] (11 MBps) [2024-11-19T07:35:00.365Z] Copying: 557/1024 [MB] (12 MBps) [2024-11-19T07:35:01.301Z] Copying: 569/1024 [MB] (12 MBps) [2024-11-19T07:35:02.244Z] Copying: 583/1024 [MB] (14 MBps) [2024-11-19T07:35:03.629Z] Copying: 595/1024 [MB] (11 MBps) [2024-11-19T07:35:04.567Z] Copying: 606/1024 [MB] (11 MBps) [2024-11-19T07:35:05.502Z] Copying: 617/1024 [MB] (11 MBps) [2024-11-19T07:35:06.436Z] Copying: 641/1024 [MB] (23 MBps) [2024-11-19T07:35:07.370Z] Copying: 665/1024 [MB] (23 MBps) [2024-11-19T07:35:08.303Z] 
Copying: 687/1024 [MB] (22 MBps) [2024-11-19T07:35:09.675Z] Copying: 710/1024 [MB] (22 MBps) [2024-11-19T07:35:10.608Z] Copying: 724/1024 [MB] (14 MBps) [2024-11-19T07:35:11.542Z] Copying: 739/1024 [MB] (14 MBps) [2024-11-19T07:35:12.480Z] Copying: 751/1024 [MB] (12 MBps) [2024-11-19T07:35:13.471Z] Copying: 766/1024 [MB] (14 MBps) [2024-11-19T07:35:14.406Z] Copying: 787/1024 [MB] (21 MBps) [2024-11-19T07:35:15.341Z] Copying: 815/1024 [MB] (27 MBps) [2024-11-19T07:35:16.275Z] Copying: 844/1024 [MB] (28 MBps) [2024-11-19T07:35:17.647Z] Copying: 863/1024 [MB] (19 MBps) [2024-11-19T07:35:18.617Z] Copying: 876/1024 [MB] (13 MBps) [2024-11-19T07:35:19.550Z] Copying: 890/1024 [MB] (13 MBps) [2024-11-19T07:35:20.505Z] Copying: 903/1024 [MB] (13 MBps) [2024-11-19T07:35:21.438Z] Copying: 916/1024 [MB] (12 MBps) [2024-11-19T07:35:22.372Z] Copying: 930/1024 [MB] (13 MBps) [2024-11-19T07:35:23.306Z] Copying: 943/1024 [MB] (13 MBps) [2024-11-19T07:35:24.680Z] Copying: 956/1024 [MB] (12 MBps) [2024-11-19T07:35:25.246Z] Copying: 970/1024 [MB] (13 MBps) [2024-11-19T07:35:26.644Z] Copying: 982/1024 [MB] (12 MBps) [2024-11-19T07:35:27.579Z] Copying: 996/1024 [MB] (13 MBps) [2024-11-19T07:35:27.579Z] Copying: 1016/1024 [MB] (20 MBps) [2024-11-19T07:35:27.843Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-19 07:35:27.731882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.732141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:18.593 [2024-11-19 07:35:27.732332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:18.593 [2024-11-19 07:35:27.732430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.732495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.593 [2024-11-19 07:35:27.735654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.735782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:18.593 [2024-11-19 07:35:27.735845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:20:18.593 [2024-11-19 07:35:27.735873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.736199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.736240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:18.593 [2024-11-19 07:35:27.736647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:18.593 [2024-11-19 07:35:27.736671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.742767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.742867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:18.593 [2024-11-19 07:35:27.742924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.063 ms 00:20:18.593 [2024-11-19 07:35:27.742945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.749047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.749145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:18.593 [2024-11-19 07:35:27.749206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
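
The "Copying" updates above are spdk_dd progress output; roughly 74 seconds separate the end of FTL startup (07:34:14) from the final update (07:35:27), which lines up with the reported average:

# 1024 MB over ~74 s (bash integer division floors the result):
echo "$(( 1024 / 74 )) MBps"   # 13 MBps, matching "average 13 MBps" above
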
duration: 6.070 ms 00:20:18.593 [2024-11-19 07:35:27.749229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.773716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.773826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:18.593 [2024-11-19 07:35:27.773874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.413 ms 00:20:18.593 [2024-11-19 07:35:27.773895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.788916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.789025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:18.593 [2024-11-19 07:35:27.789073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.981 ms 00:20:18.593 [2024-11-19 07:35:27.789101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.789521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.789582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:18.593 [2024-11-19 07:35:27.789606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:20:18.593 [2024-11-19 07:35:27.789699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.813828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.813862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:18.593 [2024-11-19 07:35:27.813873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.107 ms 00:20:18.593 [2024-11-19 07:35:27.813880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.593 [2024-11-19 07:35:27.837145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.593 [2024-11-19 07:35:27.837176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:18.593 [2024-11-19 07:35:27.837208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.234 ms 00:20:18.593 [2024-11-19 07:35:27.837215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.854 [2024-11-19 07:35:27.860351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.854 [2024-11-19 07:35:27.860382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:18.854 [2024-11-19 07:35:27.860392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.106 ms 00:20:18.854 [2024-11-19 07:35:27.860399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.854 [2024-11-19 07:35:27.883344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.854 [2024-11-19 07:35:27.883459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:18.854 [2024-11-19 07:35:27.883475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.882 ms 00:20:18.854 [2024-11-19 07:35:27.883482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.854 [2024-11-19 07:35:27.883508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:18.854 [2024-11-19 07:35:27.883526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 
state: free 00:20:18.854 [2024-11-19 07:35:27.883535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 
261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.854 [2024-11-19 07:35:27.883979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.883986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.883993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884074] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 07:35:27.884269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-11-19 
07:35:27.884285] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.855 [2024-11-19 07:35:27.884293] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37031644-9415-4ff1-9650-6b30c5b3b16d 00:20:18.855 [2024-11-19 07:35:27.884300] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.855 [2024-11-19 07:35:27.884307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:18.855 [2024-11-19 07:35:27.884314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.855 [2024-11-19 07:35:27.884321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.855 [2024-11-19 07:35:27.884328] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.855 [2024-11-19 07:35:27.884335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.855 [2024-11-19 07:35:27.884342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.855 [2024-11-19 07:35:27.884354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.855 [2024-11-19 07:35:27.884360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.855 [2024-11-19 07:35:27.884367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-11-19 07:35:27.884374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:18.855 [2024-11-19 07:35:27.884385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:20:18.855 [2024-11-19 07:35:27.884392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:27.896713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-11-19 07:35:27.896742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.855 [2024-11-19 07:35:27.896751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:20:18.855 [2024-11-19 07:35:27.896758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:27.896950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-11-19 07:35:27.896963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.855 [2024-11-19 07:35:27.896970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:20:18.855 [2024-11-19 07:35:27.896977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:27.932270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:27.932302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.855 [2024-11-19 07:35:27.932311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:27.932319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:27.932366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:27.932378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.855 [2024-11-19 07:35:27.932386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:27.932393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:27.932452] 
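
The "WAF: inf" in the statistics above follows from the two counters printed with it: write amplification here is total writes over user writes, and this pass recorded 960 internal writes against 0 user writes, so the ratio is infinite. The band dump is also consistent with the earlier layout figures, assuming 4 KiB blocks; presumably the ~400 MiB gap versus the 102400.00 MiB data_btm region is per-band bookkeeping:

# 100 bands of 261120 blocks at 4 KiB each:
echo "$(( 100 * 261120 * 4096 / 1024 / 1024 )) MiB"   # 102000 MiB
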
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:27.932462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.855 [2024-11-19 07:35:27.932469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:27.932476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:27.932490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:27.932497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.855 [2024-11-19 07:35:27.932507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:27.932514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:28.006511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:28.006661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.855 [2024-11-19 07:35:28.006676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:28.006684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:28.035766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:28.035799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.855 [2024-11-19 07:35:28.035813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:28.035820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:28.035869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:28.035877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.855 [2024-11-19 07:35:28.035884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:28.035892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:28.035929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:28.035937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.855 [2024-11-19 07:35:28.035945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:28.035954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:28.036041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:28.036050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.855 [2024-11-19 07:35:28.036058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:28.036065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:28.036091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:28.036099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:18.855 [2024-11-19 07:35:28.036106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.855 [2024-11-19 07:35:28.036113] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-11-19 07:35:28.036149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.855 [2024-11-19 07:35:28.036157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.855 [2024-11-19 07:35:28.036165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.856 [2024-11-19 07:35:28.036172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.856 [2024-11-19 07:35:28.036237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.856 [2024-11-19 07:35:28.036247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.856 [2024-11-19 07:35:28.036254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.856 [2024-11-19 07:35:28.036264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.856 [2024-11-19 07:35:28.036368] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 304.468 ms, result 0 00:20:19.791 00:20:19.791 00:20:19.791 07:35:28 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:22.321 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:22.321 07:35:30 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:22.321 [2024-11-19 07:35:31.049472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:22.321 [2024-11-19 07:35:31.049582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74627 ] 00:20:22.321 [2024-11-19 07:35:31.197719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.321 [2024-11-19 07:35:31.370094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.582 [2024-11-19 07:35:31.621109] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.582 [2024-11-19 07:35:31.621174] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.582 [2024-11-19 07:35:31.772011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.772060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:22.582 [2024-11-19 07:35:31.772073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:22.582 [2024-11-19 07:35:31.772083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.772129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.772139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.582 [2024-11-19 07:35:31.772147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:22.582 [2024-11-19 07:35:31.772154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.772173] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:22.582 [2024-11-19 07:35:31.772912] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: 
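
The "md5sum -c ... testfile: OK" line above is the integrity check in restore.sh: a checksum recorded earlier in the test is re-verified after the FTL device has been torn down and brought back. The shape of that check, with illustrative file names rather than the test's actual paths:

md5sum testfile > testfile.md5    # recorded before the FTL shutdown
# ... shut down, restore the FTL bdev, read the data back ...
md5sum -c testfile.md5            # prints "testfile: OK" on a faithful restore
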
*NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:22.582 [2024-11-19 07:35:31.772932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.772940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.582 [2024-11-19 07:35:31.772948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:20:22.582 [2024-11-19 07:35:31.772955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.774112] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:22.582 [2024-11-19 07:35:31.786628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.786662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:22.582 [2024-11-19 07:35:31.786674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.517 ms 00:20:22.582 [2024-11-19 07:35:31.786681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.786732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.786741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:22.582 [2024-11-19 07:35:31.786749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:22.582 [2024-11-19 07:35:31.786756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.791744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.791775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.582 [2024-11-19 07:35:31.791785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms 00:20:22.582 [2024-11-19 07:35:31.791792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.791867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.791876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.582 [2024-11-19 07:35:31.791883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:22.582 [2024-11-19 07:35:31.791890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.791935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.791944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:22.582 [2024-11-19 07:35:31.791951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:22.582 [2024-11-19 07:35:31.791958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.791984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.582 [2024-11-19 07:35:31.795492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.795519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.582 [2024-11-19 07:35:31.795529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.519 ms 00:20:22.582 [2024-11-19 07:35:31.795535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.795565] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.795572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:22.582 [2024-11-19 07:35:31.795580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:22.582 [2024-11-19 07:35:31.795589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.795608] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:22.582 [2024-11-19 07:35:31.795625] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:22.582 [2024-11-19 07:35:31.795656] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:22.582 [2024-11-19 07:35:31.795670] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:22.582 [2024-11-19 07:35:31.795742] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:22.582 [2024-11-19 07:35:31.795752] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:22.582 [2024-11-19 07:35:31.795763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:22.582 [2024-11-19 07:35:31.795773] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:22.582 [2024-11-19 07:35:31.795781] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:22.582 [2024-11-19 07:35:31.795789] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:22.582 [2024-11-19 07:35:31.795796] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:22.582 [2024-11-19 07:35:31.795803] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:22.582 [2024-11-19 07:35:31.795810] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:22.582 [2024-11-19 07:35:31.795817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.795824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:22.582 [2024-11-19 07:35:31.795831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:20:22.582 [2024-11-19 07:35:31.795838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.795898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.582 [2024-11-19 07:35:31.795906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:22.582 [2024-11-19 07:35:31.795913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:22.582 [2024-11-19 07:35:31.795919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.582 [2024-11-19 07:35:31.795998] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:22.582 [2024-11-19 07:35:31.796008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:22.582 [2024-11-19 07:35:31.796016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.582 [2024-11-19 07:35:31.796023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:20:22.582 [2024-11-19 07:35:31.796031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:22.582 [2024-11-19 07:35:31.796037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:22.582 [2024-11-19 07:35:31.796044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:22.582 [2024-11-19 07:35:31.796051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:22.582 [2024-11-19 07:35:31.796058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:22.582 [2024-11-19 07:35:31.796064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.582 [2024-11-19 07:35:31.796073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:22.582 [2024-11-19 07:35:31.796079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:22.582 [2024-11-19 07:35:31.796085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.582 [2024-11-19 07:35:31.796092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:22.583 [2024-11-19 07:35:31.796098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:22.583 [2024-11-19 07:35:31.796104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:22.583 [2024-11-19 07:35:31.796123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:22.583 [2024-11-19 07:35:31.796129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796136] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:22.583 [2024-11-19 07:35:31.796142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:22.583 [2024-11-19 07:35:31.796149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:22.583 [2024-11-19 07:35:31.796155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:22.583 [2024-11-19 07:35:31.796162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:22.583 [2024-11-19 07:35:31.796175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:22.583 [2024-11-19 07:35:31.796200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:22.583 [2024-11-19 07:35:31.796213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:22.583 [2024-11-19 07:35:31.796220] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:22.583 [2024-11-19 07:35:31.796233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:22.583 [2024-11-19 07:35:31.796239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:22.583 [2024-11-19 07:35:31.796251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:22.583 [2024-11-19 07:35:31.796258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 97.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.583 [2024-11-19 07:35:31.796271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:22.583 [2024-11-19 07:35:31.796277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:22.583 [2024-11-19 07:35:31.796283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.583 [2024-11-19 07:35:31.796289] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:22.583 [2024-11-19 07:35:31.796298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:22.583 [2024-11-19 07:35:31.796306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.583 [2024-11-19 07:35:31.796314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.583 [2024-11-19 07:35:31.796321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:22.583 [2024-11-19 07:35:31.796327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:22.583 [2024-11-19 07:35:31.796333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:22.583 [2024-11-19 07:35:31.796340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:22.583 [2024-11-19 07:35:31.796346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:22.583 [2024-11-19 07:35:31.796352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:22.583 [2024-11-19 07:35:31.796360] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:22.583 [2024-11-19 07:35:31.796369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.583 [2024-11-19 07:35:31.796377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:22.583 [2024-11-19 07:35:31.796385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:22.583 [2024-11-19 07:35:31.796392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:22.583 [2024-11-19 07:35:31.796399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:22.583 [2024-11-19 07:35:31.796406] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:22.583 [2024-11-19 07:35:31.796412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:22.583 [2024-11-19 07:35:31.796419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:22.583 [2024-11-19 07:35:31.796426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:22.583 [2024-11-19 07:35:31.796433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:22.583 [2024-11-19 07:35:31.796439] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:22.583 [2024-11-19 07:35:31.796446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:22.583 [2024-11-19 07:35:31.796453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:22.583 [2024-11-19 07:35:31.796460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:22.583 [2024-11-19 07:35:31.796467] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:22.583 [2024-11-19 07:35:31.796475] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.583 [2024-11-19 07:35:31.796482] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:22.583 [2024-11-19 07:35:31.796489] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:22.583 [2024-11-19 07:35:31.796496] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:22.583 [2024-11-19 07:35:31.796503] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:22.583 [2024-11-19 07:35:31.796511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.583 [2024-11-19 07:35:31.796518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:22.583 [2024-11-19 07:35:31.796525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:20:22.583 [2024-11-19 07:35:31.796532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.583 [2024-11-19 07:35:31.811493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.583 [2024-11-19 07:35:31.811524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.583 [2024-11-19 07:35:31.811534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.921 ms 00:20:22.583 [2024-11-19 07:35:31.811544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.583 [2024-11-19 07:35:31.811627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.583 [2024-11-19 07:35:31.811634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:22.583 [2024-11-19 07:35:31.811642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:22.583 [2024-11-19 07:35:31.811649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.851508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.851659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.843 [2024-11-19 07:35:31.851677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.816 ms 00:20:22.843 [2024-11-19 07:35:31.851686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.851727] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.851736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.843 [2024-11-19 07:35:31.851744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:22.843 [2024-11-19 07:35:31.851751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.852103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.852118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.843 [2024-11-19 07:35:31.852127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:20:22.843 [2024-11-19 07:35:31.852138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.852268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.852278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.843 [2024-11-19 07:35:31.852286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:20:22.843 [2024-11-19 07:35:31.852293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.866015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.866046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.843 [2024-11-19 07:35:31.866056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.702 ms 00:20:22.843 [2024-11-19 07:35:31.866063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.879003] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:22.843 [2024-11-19 07:35:31.879037] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:22.843 [2024-11-19 07:35:31.879047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.879054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:22.843 [2024-11-19 07:35:31.879063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.896 ms 00:20:22.843 [2024-11-19 07:35:31.879070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.903631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.903664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.843 [2024-11-19 07:35:31.903674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.523 ms 00:20:22.843 [2024-11-19 07:35:31.903681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.915696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.915727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.843 [2024-11-19 07:35:31.915736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.978 ms 00:20:22.843 [2024-11-19 07:35:31.915743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.927430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 
07:35:31.927467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.843 [2024-11-19 07:35:31.927476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.651 ms 00:20:22.843 [2024-11-19 07:35:31.927482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.927832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.927843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.843 [2024-11-19 07:35:31.927851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:22.843 [2024-11-19 07:35:31.927858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.985708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.985847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.843 [2024-11-19 07:35:31.985868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.835 ms 00:20:22.843 [2024-11-19 07:35:31.985876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.996588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:22.843 [2024-11-19 07:35:31.998945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.998976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.843 [2024-11-19 07:35:31.998987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.032 ms 00:20:22.843 [2024-11-19 07:35:31.998999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.999059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.999070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.843 [2024-11-19 07:35:31.999080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.843 [2024-11-19 07:35:31.999087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:31.999145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:31.999155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.843 [2024-11-19 07:35:31.999163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:22.843 [2024-11-19 07:35:31.999170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:32.000324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:32.000351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:22.843 [2024-11-19 07:35:32.000360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:20:22.843 [2024-11-19 07:35:32.000368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:32.000394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:32.000402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.843 [2024-11-19 07:35:32.000413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.843 [2024-11-19 
07:35:32.000420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:32.000449] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.843 [2024-11-19 07:35:32.000458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:32.000467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.843 [2024-11-19 07:35:32.000474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.843 [2024-11-19 07:35:32.000480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:32.024309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:32.024342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.843 [2024-11-19 07:35:32.024353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.811 ms 00:20:22.843 [2024-11-19 07:35:32.024361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:32.024429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.843 [2024-11-19 07:35:32.024438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.843 [2024-11-19 07:35:32.024446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:22.843 [2024-11-19 07:35:32.024453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.843 [2024-11-19 07:35:32.025322] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 252.894 ms, result 0 00:20:24.216  [2024-11-19T07:35:34.400Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-19T07:35:35.332Z] Copying: 27/1024 [MB] (14 MBps) [2024-11-19T07:35:36.267Z] Copying: 77/1024 [MB] (50 MBps) [2024-11-19T07:35:37.200Z] Copying: 96/1024 [MB] (19 MBps) [2024-11-19T07:35:38.134Z] Copying: 116/1024 [MB] (19 MBps) [2024-11-19T07:35:39.069Z] Copying: 132/1024 [MB] (15 MBps) [2024-11-19T07:35:40.474Z] Copying: 152/1024 [MB] (20 MBps) [2024-11-19T07:35:41.041Z] Copying: 171/1024 [MB] (18 MBps) [2024-11-19T07:35:42.416Z] Copying: 194/1024 [MB] (23 MBps) [2024-11-19T07:35:43.352Z] Copying: 220/1024 [MB] (25 MBps) [2024-11-19T07:35:44.286Z] Copying: 241/1024 [MB] (20 MBps) [2024-11-19T07:35:45.221Z] Copying: 262/1024 [MB] (20 MBps) [2024-11-19T07:35:46.155Z] Copying: 286/1024 [MB] (23 MBps) [2024-11-19T07:35:47.090Z] Copying: 309/1024 [MB] (23 MBps) [2024-11-19T07:35:48.463Z] Copying: 328/1024 [MB] (18 MBps) [2024-11-19T07:35:49.395Z] Copying: 354/1024 [MB] (25 MBps) [2024-11-19T07:35:50.327Z] Copying: 381/1024 [MB] (27 MBps) [2024-11-19T07:35:51.263Z] Copying: 404/1024 [MB] (23 MBps) [2024-11-19T07:35:52.195Z] Copying: 424/1024 [MB] (19 MBps) [2024-11-19T07:35:53.234Z] Copying: 439/1024 [MB] (15 MBps) [2024-11-19T07:35:54.172Z] Copying: 454/1024 [MB] (14 MBps) [2024-11-19T07:35:55.105Z] Copying: 470/1024 [MB] (15 MBps) [2024-11-19T07:35:56.038Z] Copying: 483/1024 [MB] (12 MBps) [2024-11-19T07:35:57.411Z] Copying: 499/1024 [MB] (16 MBps) [2024-11-19T07:35:58.344Z] Copying: 520/1024 [MB] (20 MBps) [2024-11-19T07:35:59.278Z] Copying: 533/1024 [MB] (12 MBps) [2024-11-19T07:36:00.212Z] Copying: 545/1024 [MB] (12 MBps) [2024-11-19T07:36:01.145Z] Copying: 557/1024 [MB] (12 MBps) [2024-11-19T07:36:02.078Z] Copying: 571/1024 [MB] (14 MBps) [2024-11-19T07:36:03.456Z] Copying: 584/1024 [MB] (12 
MBps) [2024-11-19T07:36:04.390Z] Copying: 596/1024 [MB] (12 MBps) [2024-11-19T07:36:05.324Z] Copying: 608/1024 [MB] (12 MBps) [2024-11-19T07:36:06.327Z] Copying: 660/1024 [MB] (51 MBps) [2024-11-19T07:36:07.262Z] Copying: 699/1024 [MB] (39 MBps) [2024-11-19T07:36:08.196Z] Copying: 717/1024 [MB] (18 MBps) [2024-11-19T07:36:09.130Z] Copying: 732/1024 [MB] (14 MBps) [2024-11-19T07:36:10.062Z] Copying: 750/1024 [MB] (18 MBps) [2024-11-19T07:36:11.436Z] Copying: 767/1024 [MB] (17 MBps) [2024-11-19T07:36:12.370Z] Copying: 788/1024 [MB] (20 MBps) [2024-11-19T07:36:13.302Z] Copying: 813/1024 [MB] (24 MBps) [2024-11-19T07:36:14.235Z] Copying: 835/1024 [MB] (21 MBps) [2024-11-19T07:36:15.168Z] Copying: 856/1024 [MB] (20 MBps) [2024-11-19T07:36:16.101Z] Copying: 876/1024 [MB] (20 MBps) [2024-11-19T07:36:17.034Z] Copying: 895/1024 [MB] (18 MBps) [2024-11-19T07:36:18.406Z] Copying: 916/1024 [MB] (20 MBps) [2024-11-19T07:36:19.362Z] Copying: 940/1024 [MB] (24 MBps) [2024-11-19T07:36:20.296Z] Copying: 960/1024 [MB] (19 MBps) [2024-11-19T07:36:21.229Z] Copying: 985/1024 [MB] (25 MBps) [2024-11-19T07:36:22.163Z] Copying: 1006/1024 [MB] (20 MBps) [2024-11-19T07:36:22.728Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-19T07:36:22.728Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-19 07:36:22.700468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.478 [2024-11-19 07:36:22.700531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:13.478 [2024-11-19 07:36:22.700545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:13.478 [2024-11-19 07:36:22.700553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.478 [2024-11-19 07:36:22.701939] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:13.478 [2024-11-19 07:36:22.705804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.478 [2024-11-19 07:36:22.705944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:13.478 [2024-11-19 07:36:22.705961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.831 ms 00:21:13.478 [2024-11-19 07:36:22.705968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.478 [2024-11-19 07:36:22.717619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.479 [2024-11-19 07:36:22.717661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:13.479 [2024-11-19 07:36:22.717679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.021 ms 00:21:13.479 [2024-11-19 07:36:22.717687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.737 [2024-11-19 07:36:22.741925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.737 [2024-11-19 07:36:22.741957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:13.737 [2024-11-19 07:36:22.741967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.222 ms 00:21:13.737 [2024-11-19 07:36:22.741974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.737 [2024-11-19 07:36:22.748048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.737 [2024-11-19 07:36:22.748075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:13.737 [2024-11-19 07:36:22.748085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.052 ms 
00:21:13.737 [2024-11-19 07:36:22.748097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.737 [2024-11-19 07:36:22.772423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.737 [2024-11-19 07:36:22.772455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:13.737 [2024-11-19 07:36:22.772466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.286 ms 00:21:13.737 [2024-11-19 07:36:22.772473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.737 [2024-11-19 07:36:22.787119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.737 [2024-11-19 07:36:22.787149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:13.737 [2024-11-19 07:36:22.787160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.616 ms 00:21:13.737 [2024-11-19 07:36:22.787167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.997 [2024-11-19 07:36:23.042642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.997 [2024-11-19 07:36:23.042790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:13.997 [2024-11-19 07:36:23.042809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 255.425 ms 00:21:13.997 [2024-11-19 07:36:23.042818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.997 [2024-11-19 07:36:23.067048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.997 [2024-11-19 07:36:23.067082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:13.997 [2024-11-19 07:36:23.067093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.205 ms 00:21:13.997 [2024-11-19 07:36:23.067100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.997 [2024-11-19 07:36:23.090560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.997 [2024-11-19 07:36:23.090591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:13.997 [2024-11-19 07:36:23.090609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.428 ms 00:21:13.997 [2024-11-19 07:36:23.090615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.997 [2024-11-19 07:36:23.113886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.997 [2024-11-19 07:36:23.114007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:13.997 [2024-11-19 07:36:23.114022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.240 ms 00:21:13.997 [2024-11-19 07:36:23.114028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.997 [2024-11-19 07:36:23.136993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.997 [2024-11-19 07:36:23.137103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:13.997 [2024-11-19 07:36:23.137117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.906 ms 00:21:13.997 [2024-11-19 07:36:23.137124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.997 [2024-11-19 07:36:23.137150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:13.997 [2024-11-19 07:36:23.137163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99072 / 261120 wr_cnt: 1 state: open 
00:21:13.997 [2024-11-19 07:36:23.137173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:21:13.997 [2024-11-19 07:36:23.137376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:13.997 [2024-11-19 07:36:23.137469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:13.998 [2024-11-19 07:36:23.137960] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:13.998 [2024-11-19 07:36:23.137967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37031644-9415-4ff1-9650-6b30c5b3b16d 00:21:13.998 [2024-11-19 07:36:23.137975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99072 00:21:13.998 [2024-11-19 07:36:23.137982] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100032 00:21:13.998 [2024-11-19 07:36:23.137989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99072 00:21:13.998 [2024-11-19 07:36:23.137999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:21:13.998 [2024-11-19 07:36:23.138007] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:13.998 [2024-11-19 07:36:23.138014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:13.998 [2024-11-19 07:36:23.138022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:13.998 [2024-11-19 07:36:23.138033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:13.998 [2024-11-19 07:36:23.138039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:13.998 [2024-11-19 07:36:23.138046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.998 [2024-11-19 07:36:23.138054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:13.998 [2024-11-19 07:36:23.138061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:21:13.998 [2024-11-19 07:36:23.138068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.998 [2024-11-19 07:36:23.150500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.998 [2024-11-19 07:36:23.150533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:13.998 [2024-11-19 07:36:23.150542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.406 ms 00:21:13.998 [2024-11-19 07:36:23.150549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.998 [2024-11-19 07:36:23.150742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.998 [2024-11-19 07:36:23.150751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:13.998 [2024-11-19 07:36:23.150759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:21:13.998 [2024-11-19 07:36:23.150765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.998 [2024-11-19 07:36:23.185960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.998 [2024-11-19 07:36:23.185995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.998 [2024-11-19 07:36:23.186005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.998 [2024-11-19 07:36:23.186013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.998 [2024-11-19 07:36:23.186068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.998 [2024-11-19 07:36:23.186076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:13.998 [2024-11-19 07:36:23.186084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.998 [2024-11-19 07:36:23.186090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.999 [2024-11-19 07:36:23.186148] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.999 [2024-11-19 07:36:23.186162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:13.999 [2024-11-19 07:36:23.186170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.999 [2024-11-19 07:36:23.186177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.999 [2024-11-19 07:36:23.186211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.999 [2024-11-19 07:36:23.186218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:13.999 [2024-11-19 07:36:23.186226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.999 [2024-11-19 07:36:23.186233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.259438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.259479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.257 [2024-11-19 07:36:23.259489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.259496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.288788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.288821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.257 [2024-11-19 07:36:23.288831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.288839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.288886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.288894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.257 [2024-11-19 07:36:23.288907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.288914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.288951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.288959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.257 [2024-11-19 07:36:23.288967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.288974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.289058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.289068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.257 [2024-11-19 07:36:23.289075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.289085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.289113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.289121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:14.257 [2024-11-19 07:36:23.289129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.289136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.289168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.289176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.257 [2024-11-19 07:36:23.289209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.289234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.289276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.257 [2024-11-19 07:36:23.289285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.257 [2024-11-19 07:36:23.289292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.257 [2024-11-19 07:36:23.289299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.257 [2024-11-19 07:36:23.289416] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 589.406 ms, result 0 00:21:15.631 00:21:15.631 00:21:15.631 07:36:24 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:15.631 [2024-11-19 07:36:24.535015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:15.631 [2024-11-19 07:36:24.535129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:21:15.631 [2024-11-19 07:36:24.685382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.631 [2024-11-19 07:36:24.858273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.889 [2024-11-19 07:36:25.106859] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:15.890 [2024-11-19 07:36:25.106914] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:16.149 [2024-11-19 07:36:25.263631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.149 [2024-11-19 07:36:25.263784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:16.149 [2024-11-19 07:36:25.263803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:16.149 [2024-11-19 07:36:25.263815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.149 [2024-11-19 07:36:25.263867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.149 [2024-11-19 07:36:25.263877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:16.149 [2024-11-19 07:36:25.263886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:16.149 [2024-11-19 07:36:25.263893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.149 [2024-11-19 07:36:25.263911] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:16.149 [2024-11-19 07:36:25.264874] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:16.149 [2024-11-19 07:36:25.264920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.149 
[2024-11-19 07:36:25.264929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:16.149 [2024-11-19 07:36:25.264939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:21:16.149 [2024-11-19 07:36:25.264946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.149 [2024-11-19 07:36:25.266018] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:16.149 [2024-11-19 07:36:25.278840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.149 [2024-11-19 07:36:25.278958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:16.150 [2024-11-19 07:36:25.278974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.822 ms 00:21:16.150 [2024-11-19 07:36:25.278982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.279026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.279035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:16.150 [2024-11-19 07:36:25.279043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:16.150 [2024-11-19 07:36:25.279050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.283889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.283917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:16.150 [2024-11-19 07:36:25.283926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.784 ms 00:21:16.150 [2024-11-19 07:36:25.283933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.284009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.284018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:16.150 [2024-11-19 07:36:25.284026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:16.150 [2024-11-19 07:36:25.284033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.284076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.284086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:16.150 [2024-11-19 07:36:25.284093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:16.150 [2024-11-19 07:36:25.284100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.284126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:16.150 [2024-11-19 07:36:25.287528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.287553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:16.150 [2024-11-19 07:36:25.287562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:21:16.150 [2024-11-19 07:36:25.287569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.287598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.287606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:16.150 
[2024-11-19 07:36:25.287614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:16.150 [2024-11-19 07:36:25.287623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.287641] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:16.150 [2024-11-19 07:36:25.287658] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:16.150 [2024-11-19 07:36:25.287689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:16.150 [2024-11-19 07:36:25.287703] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:16.150 [2024-11-19 07:36:25.287775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:16.150 [2024-11-19 07:36:25.287785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:16.150 [2024-11-19 07:36:25.287796] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:16.150 [2024-11-19 07:36:25.287806] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:16.150 [2024-11-19 07:36:25.287814] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:16.150 [2024-11-19 07:36:25.287822] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:16.150 [2024-11-19 07:36:25.287829] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:16.150 [2024-11-19 07:36:25.287836] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:16.150 [2024-11-19 07:36:25.287843] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:16.150 [2024-11-19 07:36:25.287850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.287857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:16.150 [2024-11-19 07:36:25.287865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:16.150 [2024-11-19 07:36:25.287871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.287931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.150 [2024-11-19 07:36:25.287939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:16.150 [2024-11-19 07:36:25.287945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:16.150 [2024-11-19 07:36:25.287952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.150 [2024-11-19 07:36:25.288029] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:16.150 [2024-11-19 07:36:25.288038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:16.150 [2024-11-19 07:36:25.288046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:16.150 [2024-11-19 07:36:25.288067] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:16.150 [2024-11-19 07:36:25.288089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:16.150 [2024-11-19 07:36:25.288102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:16.150 [2024-11-19 07:36:25.288108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:16.150 [2024-11-19 07:36:25.288115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:16.150 [2024-11-19 07:36:25.288122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:16.150 [2024-11-19 07:36:25.288129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:16.150 [2024-11-19 07:36:25.288135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:16.150 [2024-11-19 07:36:25.288154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:16.150 [2024-11-19 07:36:25.288160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:16.150 [2024-11-19 07:36:25.288173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:16.150 [2024-11-19 07:36:25.288197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:16.150 [2024-11-19 07:36:25.288211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:16.150 [2024-11-19 07:36:25.288230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:16.150 [2024-11-19 07:36:25.288250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:16.150 [2024-11-19 07:36:25.288269] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:16.150 [2024-11-19 07:36:25.288290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:16.150 [2024-11-19 07:36:25.288303] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:16.150 [2024-11-19 07:36:25.288312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:16.150 [2024-11-19 07:36:25.288318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:16.150 [2024-11-19 07:36:25.288325] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:16.150 [2024-11-19 07:36:25.288334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:16.150 [2024-11-19 07:36:25.288341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.150 [2024-11-19 07:36:25.288356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:16.150 [2024-11-19 07:36:25.288362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:16.150 [2024-11-19 07:36:25.288369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:16.150 [2024-11-19 07:36:25.288375] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:16.150 [2024-11-19 07:36:25.288381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:16.150 [2024-11-19 07:36:25.288388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:16.150 [2024-11-19 07:36:25.288395] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:16.150 [2024-11-19 07:36:25.288404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.150 [2024-11-19 07:36:25.288412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:16.150 [2024-11-19 07:36:25.288419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:16.150 [2024-11-19 07:36:25.288426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:16.151 [2024-11-19 07:36:25.288433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:16.151 [2024-11-19 07:36:25.288440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:16.151 [2024-11-19 07:36:25.288447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:16.151 [2024-11-19 07:36:25.288454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:16.151 [2024-11-19 07:36:25.288461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:16.151 [2024-11-19 07:36:25.288468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:16.151 [2024-11-19 07:36:25.288475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:16.151 [2024-11-19 07:36:25.288481] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:16.151 [2024-11-19 07:36:25.288488] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:16.151 [2024-11-19 07:36:25.288495] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:16.151 [2024-11-19 07:36:25.288503] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:16.151 [2024-11-19 07:36:25.288511] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.151 [2024-11-19 07:36:25.288523] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:16.151 [2024-11-19 07:36:25.288530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:16.151 [2024-11-19 07:36:25.288537] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:16.151 [2024-11-19 07:36:25.288545] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:16.151 [2024-11-19 07:36:25.288552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.288558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:16.151 [2024-11-19 07:36:25.288565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:21:16.151 [2024-11-19 07:36:25.288572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.303099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.303130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.151 [2024-11-19 07:36:25.303140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.486 ms 00:21:16.151 [2024-11-19 07:36:25.303151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.303247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.303255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:16.151 [2024-11-19 07:36:25.303263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:16.151 [2024-11-19 07:36:25.303270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.346198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.346331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.151 [2024-11-19 07:36:25.346349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.888 ms 00:21:16.151 [2024-11-19 07:36:25.346357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.346395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.346404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:21:16.151 [2024-11-19 07:36:25.346413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:16.151 [2024-11-19 07:36:25.346420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.346752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.346775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.151 [2024-11-19 07:36:25.346784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:16.151 [2024-11-19 07:36:25.346795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.346901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.346909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.151 [2024-11-19 07:36:25.346917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:16.151 [2024-11-19 07:36:25.346924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.360510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.360539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.151 [2024-11-19 07:36:25.360548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.567 ms 00:21:16.151 [2024-11-19 07:36:25.360555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.373092] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:16.151 [2024-11-19 07:36:25.373124] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:16.151 [2024-11-19 07:36:25.373134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.373142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:16.151 [2024-11-19 07:36:25.373151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.497 ms 00:21:16.151 [2024-11-19 07:36:25.373158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.151 [2024-11-19 07:36:25.397634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.151 [2024-11-19 07:36:25.397664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:16.151 [2024-11-19 07:36:25.397674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.424 ms 00:21:16.151 [2024-11-19 07:36:25.397681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.410 [2024-11-19 07:36:25.409534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.410 [2024-11-19 07:36:25.409562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:16.410 [2024-11-19 07:36:25.409572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.818 ms 00:21:16.410 [2024-11-19 07:36:25.409578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.410 [2024-11-19 07:36:25.420702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.410 [2024-11-19 07:36:25.420815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:16.410 [2024-11-19 07:36:25.420837] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.094 ms 00:21:16.410 [2024-11-19 07:36:25.420844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.410 [2024-11-19 07:36:25.421202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.410 [2024-11-19 07:36:25.421215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:16.410 [2024-11-19 07:36:25.421224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:16.410 [2024-11-19 07:36:25.421231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.410 [2024-11-19 07:36:25.479452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.410 [2024-11-19 07:36:25.479585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:16.410 [2024-11-19 07:36:25.479601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.205 ms 00:21:16.410 [2024-11-19 07:36:25.479609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.410 [2024-11-19 07:36:25.490209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:16.411 [2024-11-19 07:36:25.492392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.492419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:16.411 [2024-11-19 07:36:25.492430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.753 ms 00:21:16.411 [2024-11-19 07:36:25.492441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.492492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.492503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:16.411 [2024-11-19 07:36:25.492512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:16.411 [2024-11-19 07:36:25.492520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.493501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.493524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:16.411 [2024-11-19 07:36:25.493533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:21:16.411 [2024-11-19 07:36:25.493540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.494672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.494697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:16.411 [2024-11-19 07:36:25.494706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:21:16.411 [2024-11-19 07:36:25.494713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.494739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.494747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:16.411 [2024-11-19 07:36:25.494759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:16.411 [2024-11-19 07:36:25.494766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.494795] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:21:16.411 [2024-11-19 07:36:25.494804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.494814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:16.411 [2024-11-19 07:36:25.494821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:16.411 [2024-11-19 07:36:25.494829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.518414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.518444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:16.411 [2024-11-19 07:36:25.518454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.569 ms 00:21:16.411 [2024-11-19 07:36:25.518462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.518527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.411 [2024-11-19 07:36:25.518536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:16.411 [2024-11-19 07:36:25.518544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:16.411 [2024-11-19 07:36:25.518551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.411 [2024-11-19 07:36:25.524617] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.097 ms, result 0 00:21:17.784  [2024-11-19T07:36:27.970Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-19T07:36:28.903Z] Copying: 28/1024 [MB] (16 MBps) [2024-11-19T07:36:29.837Z] Copying: 47/1024 [MB] (18 MBps) [2024-11-19T07:36:30.771Z] Copying: 60/1024 [MB] (12 MBps) [2024-11-19T07:36:31.706Z] Copying: 73/1024 [MB] (12 MBps) [2024-11-19T07:36:33.079Z] Copying: 85/1024 [MB] (12 MBps) [2024-11-19T07:36:34.012Z] Copying: 97/1024 [MB] (12 MBps) [2024-11-19T07:36:34.946Z] Copying: 119/1024 [MB] (21 MBps) [2024-11-19T07:36:35.880Z] Copying: 131/1024 [MB] (12 MBps) [2024-11-19T07:36:36.813Z] Copying: 143/1024 [MB] (12 MBps) [2024-11-19T07:36:37.746Z] Copying: 161/1024 [MB] (17 MBps) [2024-11-19T07:36:39.121Z] Copying: 175/1024 [MB] (14 MBps) [2024-11-19T07:36:40.061Z] Copying: 188/1024 [MB] (12 MBps) [2024-11-19T07:36:40.996Z] Copying: 201/1024 [MB] (12 MBps) [2024-11-19T07:36:41.937Z] Copying: 221/1024 [MB] (20 MBps) [2024-11-19T07:36:42.879Z] Copying: 243/1024 [MB] (22 MBps) [2024-11-19T07:36:43.824Z] Copying: 261/1024 [MB] (17 MBps) [2024-11-19T07:36:44.766Z] Copying: 280/1024 [MB] (18 MBps) [2024-11-19T07:36:45.763Z] Copying: 298/1024 [MB] (17 MBps) [2024-11-19T07:36:46.698Z] Copying: 317/1024 [MB] (19 MBps) [2024-11-19T07:36:48.072Z] Copying: 330/1024 [MB] (13 MBps) [2024-11-19T07:36:49.004Z] Copying: 356/1024 [MB] (26 MBps) [2024-11-19T07:36:49.936Z] Copying: 372/1024 [MB] (15 MBps) [2024-11-19T07:36:50.870Z] Copying: 390/1024 [MB] (18 MBps) [2024-11-19T07:36:51.881Z] Copying: 404/1024 [MB] (14 MBps) [2024-11-19T07:36:52.814Z] Copying: 428/1024 [MB] (23 MBps) [2024-11-19T07:36:53.749Z] Copying: 452/1024 [MB] (24 MBps) [2024-11-19T07:36:55.123Z] Copying: 481/1024 [MB] (29 MBps) [2024-11-19T07:36:56.062Z] Copying: 498/1024 [MB] (16 MBps) [2024-11-19T07:36:56.995Z] Copying: 517/1024 [MB] (19 MBps) [2024-11-19T07:36:57.930Z] Copying: 530/1024 [MB] (13 MBps) [2024-11-19T07:36:58.865Z] Copying: 544/1024 [MB] (13 MBps) [2024-11-19T07:36:59.799Z] Copying: 558/1024 [MB] (14 
MBps) [2024-11-19T07:37:00.732Z] Copying: 571/1024 [MB] (13 MBps) [2024-11-19T07:37:02.107Z] Copying: 586/1024 [MB] (15 MBps) [2024-11-19T07:37:03.041Z] Copying: 603/1024 [MB] (16 MBps) [2024-11-19T07:37:03.975Z] Copying: 626/1024 [MB] (23 MBps) [2024-11-19T07:37:04.909Z] Copying: 639/1024 [MB] (12 MBps) [2024-11-19T07:37:05.844Z] Copying: 652/1024 [MB] (12 MBps) [2024-11-19T07:37:06.780Z] Copying: 665/1024 [MB] (13 MBps) [2024-11-19T07:37:07.717Z] Copying: 677/1024 [MB] (12 MBps) [2024-11-19T07:37:09.095Z] Copying: 690/1024 [MB] (12 MBps) [2024-11-19T07:37:10.032Z] Copying: 703/1024 [MB] (13 MBps) [2024-11-19T07:37:10.969Z] Copying: 715/1024 [MB] (12 MBps) [2024-11-19T07:37:11.903Z] Copying: 728/1024 [MB] (12 MBps) [2024-11-19T07:37:12.917Z] Copying: 741/1024 [MB] (13 MBps) [2024-11-19T07:37:13.857Z] Copying: 755/1024 [MB] (13 MBps) [2024-11-19T07:37:14.793Z] Copying: 768/1024 [MB] (13 MBps) [2024-11-19T07:37:15.727Z] Copying: 781/1024 [MB] (13 MBps) [2024-11-19T07:37:17.106Z] Copying: 794/1024 [MB] (13 MBps) [2024-11-19T07:37:18.042Z] Copying: 807/1024 [MB] (13 MBps) [2024-11-19T07:37:18.977Z] Copying: 820/1024 [MB] (12 MBps) [2024-11-19T07:37:19.911Z] Copying: 833/1024 [MB] (12 MBps) [2024-11-19T07:37:20.847Z] Copying: 845/1024 [MB] (12 MBps) [2024-11-19T07:37:21.788Z] Copying: 857/1024 [MB] (12 MBps) [2024-11-19T07:37:22.730Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-19T07:37:24.110Z] Copying: 880/1024 [MB] (11 MBps) [2024-11-19T07:37:25.048Z] Copying: 892/1024 [MB] (11 MBps) [2024-11-19T07:37:25.980Z] Copying: 904/1024 [MB] (12 MBps) [2024-11-19T07:37:26.926Z] Copying: 916/1024 [MB] (12 MBps) [2024-11-19T07:37:27.869Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-19T07:37:28.814Z] Copying: 939/1024 [MB] (11 MBps) [2024-11-19T07:37:29.764Z] Copying: 949/1024 [MB] (10 MBps) [2024-11-19T07:37:30.708Z] Copying: 960/1024 [MB] (10 MBps) [2024-11-19T07:37:32.097Z] Copying: 970/1024 [MB] (10 MBps) [2024-11-19T07:37:33.040Z] Copying: 981/1024 [MB] (10 MBps) [2024-11-19T07:37:33.982Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-19T07:37:34.927Z] Copying: 1007/1024 [MB] (15 MBps) [2024-11-19T07:37:34.927Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-19 07:37:34.815421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.677 [2024-11-19 07:37:34.815541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:25.677 [2024-11-19 07:37:34.815592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:25.677 [2024-11-19 07:37:34.815610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.677 [2024-11-19 07:37:34.815659] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:25.677 [2024-11-19 07:37:34.819462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.677 [2024-11-19 07:37:34.819506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:25.677 [2024-11-19 07:37:34.819519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:22:25.677 [2024-11-19 07:37:34.819527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.677 [2024-11-19 07:37:34.819804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.677 [2024-11-19 07:37:34.819817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:25.677 [2024-11-19 07:37:34.819832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 
ms 00:22:25.677 [2024-11-19 07:37:34.819841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.677 [2024-11-19 07:37:34.826392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.677 [2024-11-19 07:37:34.826441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:25.677 [2024-11-19 07:37:34.826452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.532 ms 00:22:25.677 [2024-11-19 07:37:34.826461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.677 [2024-11-19 07:37:34.833762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.677 [2024-11-19 07:37:34.833816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:25.677 [2024-11-19 07:37:34.833829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.148 ms 00:22:25.677 [2024-11-19 07:37:34.833846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.677 [2024-11-19 07:37:34.862054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.677 [2024-11-19 07:37:34.862105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:25.677 [2024-11-19 07:37:34.862118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.147 ms 00:22:25.677 [2024-11-19 07:37:34.862127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.677 [2024-11-19 07:37:34.878101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.677 [2024-11-19 07:37:34.878149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:25.677 [2024-11-19 07:37:34.878164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.924 ms 00:22:25.677 [2024-11-19 07:37:34.878174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.253 [2024-11-19 07:37:35.255669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.253 [2024-11-19 07:37:35.255871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:26.253 [2024-11-19 07:37:35.255896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 377.430 ms 00:22:26.253 [2024-11-19 07:37:35.255907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.253 [2024-11-19 07:37:35.282203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.253 [2024-11-19 07:37:35.282392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:26.253 [2024-11-19 07:37:35.282414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.264 ms 00:22:26.253 [2024-11-19 07:37:35.282424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.253 [2024-11-19 07:37:35.308444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.253 [2024-11-19 07:37:35.308493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:26.253 [2024-11-19 07:37:35.308506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.953 ms 00:22:26.253 [2024-11-19 07:37:35.308526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.253 [2024-11-19 07:37:35.333642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.253 [2024-11-19 07:37:35.333825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:26.253 [2024-11-19 07:37:35.333845] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.069 ms 00:22:26.253 [2024-11-19 07:37:35.333853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.253 [2024-11-19 07:37:35.358887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.253 [2024-11-19 07:37:35.358935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:26.253 [2024-11-19 07:37:35.358947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.872 ms 00:22:26.253 [2024-11-19 07:37:35.358955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.253 [2024-11-19 07:37:35.358998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:26.253 [2024-11-19 07:37:35.359016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:22:26.253 [2024-11-19 07:37:35.359026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:26.253 [2024-11-19 07:37:35.359390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359398] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 
07:37:35.359605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:22:26.254 [2024-11-19 07:37:35.359801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:26.254 [2024-11-19 07:37:35.359855] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:26.254 [2024-11-19 07:37:35.359862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 37031644-9415-4ff1-9650-6b30c5b3b16d 00:22:26.254 [2024-11-19 07:37:35.359870] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:22:26.254 [2024-11-19 07:37:35.359877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35520 00:22:26.254 [2024-11-19 07:37:35.359886] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34560 00:22:26.254 [2024-11-19 07:37:35.359901] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0278 00:22:26.254 [2024-11-19 07:37:35.359908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:26.254 [2024-11-19 07:37:35.359917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:26.254 [2024-11-19 07:37:35.359924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:26.254 [2024-11-19 07:37:35.359931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:26.254 [2024-11-19 07:37:35.359945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:26.254 [2024-11-19 07:37:35.359953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.254 [2024-11-19 07:37:35.359961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:26.254 [2024-11-19 07:37:35.359969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:22:26.254 [2024-11-19 07:37:35.359976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.254 [2024-11-19 07:37:35.373436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.254 [2024-11-19 07:37:35.373495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:26.254 [2024-11-19 07:37:35.373507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.426 ms 00:22:26.254 [2024-11-19 07:37:35.373516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.254 [2024-11-19 07:37:35.373747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.254 [2024-11-19 07:37:35.373757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:26.254 [2024-11-19 07:37:35.373765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:22:26.254 [2024-11-19 07:37:35.373772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.254 [2024-11-19 07:37:35.412974] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.254 [2024-11-19 07:37:35.413176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:26.254 [2024-11-19 07:37:35.413220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.254 [2024-11-19 07:37:35.413229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.254 [2024-11-19 07:37:35.413299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.254 [2024-11-19 07:37:35.413309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:26.254 [2024-11-19 07:37:35.413318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.254 [2024-11-19 07:37:35.413326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.254 [2024-11-19 07:37:35.413399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.254 [2024-11-19 07:37:35.413415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:26.254 [2024-11-19 07:37:35.413424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.254 [2024-11-19 07:37:35.413435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.254 [2024-11-19 07:37:35.413463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.254 [2024-11-19 07:37:35.413472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:26.254 [2024-11-19 07:37:35.413479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.254 [2024-11-19 07:37:35.413487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.254 [2024-11-19 07:37:35.494872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.254 [2024-11-19 07:37:35.494935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:26.255 [2024-11-19 07:37:35.494946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.255 [2024-11-19 07:37:35.494954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.516 [2024-11-19 07:37:35.526876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.516 [2024-11-19 07:37:35.527083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:26.516 [2024-11-19 07:37:35.527103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.516 [2024-11-19 07:37:35.527113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.516 [2024-11-19 07:37:35.527216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.516 [2024-11-19 07:37:35.527227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:26.516 [2024-11-19 07:37:35.527242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.516 [2024-11-19 07:37:35.527252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.516 [2024-11-19 07:37:35.527295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.516 [2024-11-19 07:37:35.527305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:26.516 [2024-11-19 07:37:35.527314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.516 [2024-11-19 07:37:35.527323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:26.516 [2024-11-19 07:37:35.527431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.516 [2024-11-19 07:37:35.527442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:26.516 [2024-11-19 07:37:35.527451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.516 [2024-11-19 07:37:35.527463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.516 [2024-11-19 07:37:35.527494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.516 [2024-11-19 07:37:35.527503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:26.516 [2024-11-19 07:37:35.527511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.516 [2024-11-19 07:37:35.527519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.516 [2024-11-19 07:37:35.527561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.516 [2024-11-19 07:37:35.527570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:26.516 [2024-11-19 07:37:35.527579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.516 [2024-11-19 07:37:35.527591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.516 [2024-11-19 07:37:35.527638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.516 [2024-11-19 07:37:35.527648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:26.516 [2024-11-19 07:37:35.527656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.516 [2024-11-19 07:37:35.527664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.516 [2024-11-19 07:37:35.527795] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 712.363 ms, result 0 00:22:27.462 00:22:27.462 00:22:27.462 07:37:36 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:30.012 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:30.012 07:37:38 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:30.012 07:37:38 -- ftl/restore.sh@85 -- # restore_kill 00:22:30.012 07:37:38 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:30.012 07:37:38 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:30.012 07:37:38 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:30.012 07:37:38 -- ftl/restore.sh@32 -- # killprocess 72939 00:22:30.012 07:37:38 -- common/autotest_common.sh@936 -- # '[' -z 72939 ']' 00:22:30.012 Process with pid 72939 is not found 00:22:30.012 07:37:38 -- common/autotest_common.sh@940 -- # kill -0 72939 00:22:30.012 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72939) - No such process 00:22:30.012 07:37:38 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72939 is not found' 00:22:30.012 Remove shared memory files 00:22:30.012 07:37:38 -- ftl/restore.sh@33 -- # remove_shm 00:22:30.012 07:37:38 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:30.012 07:37:38 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:30.012 07:37:38 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:30.012 07:37:38 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:30.012 07:37:38 -- ftl/common.sh@208 -- # rm -f rm -f 
/dev/shm/iscsi 00:22:30.012 07:37:38 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:30.012 ************************************ 00:22:30.012 END TEST ftl_restore 00:22:30.012 ************************************ 00:22:30.012 00:22:30.012 real 4m43.970s 00:22:30.012 user 4m33.075s 00:22:30.013 sys 0m11.221s 00:22:30.013 07:37:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:30.013 07:37:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.013 07:37:38 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:30.013 07:37:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:30.013 07:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:30.013 07:37:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.013 ************************************ 00:22:30.013 START TEST ftl_dirty_shutdown 00:22:30.013 ************************************ 00:22:30.013 07:37:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:30.013 * Looking for test storage... 00:22:30.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:30.013 07:37:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:30.013 07:37:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:30.013 07:37:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:30.013 07:37:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:30.013 07:37:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:30.013 07:37:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:30.013 07:37:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:30.013 07:37:39 -- scripts/common.sh@335 -- # IFS=.-: 00:22:30.013 07:37:39 -- scripts/common.sh@335 -- # read -ra ver1 00:22:30.013 07:37:39 -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.013 07:37:39 -- scripts/common.sh@336 -- # read -ra ver2 00:22:30.013 07:37:39 -- scripts/common.sh@337 -- # local 'op=<' 00:22:30.013 07:37:39 -- scripts/common.sh@339 -- # ver1_l=2 00:22:30.013 07:37:39 -- scripts/common.sh@340 -- # ver2_l=1 00:22:30.013 07:37:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:30.013 07:37:39 -- scripts/common.sh@343 -- # case "$op" in 00:22:30.013 07:37:39 -- scripts/common.sh@344 -- # : 1 00:22:30.013 07:37:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:30.013 07:37:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:30.013 07:37:39 -- scripts/common.sh@364 -- # decimal 1 00:22:30.013 07:37:39 -- scripts/common.sh@352 -- # local d=1 00:22:30.013 07:37:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.013 07:37:39 -- scripts/common.sh@354 -- # echo 1 00:22:30.013 07:37:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:30.013 07:37:39 -- scripts/common.sh@365 -- # decimal 2 00:22:30.013 07:37:39 -- scripts/common.sh@352 -- # local d=2 00:22:30.013 07:37:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.013 07:37:39 -- scripts/common.sh@354 -- # echo 2 00:22:30.013 07:37:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:30.013 07:37:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:30.013 07:37:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:30.013 07:37:39 -- scripts/common.sh@367 -- # return 0 00:22:30.013 07:37:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.013 07:37:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.013 --rc genhtml_branch_coverage=1 00:22:30.013 --rc genhtml_function_coverage=1 00:22:30.013 --rc genhtml_legend=1 00:22:30.013 --rc geninfo_all_blocks=1 00:22:30.013 --rc geninfo_unexecuted_blocks=1 00:22:30.013 00:22:30.013 ' 00:22:30.013 07:37:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.013 --rc genhtml_branch_coverage=1 00:22:30.013 --rc genhtml_function_coverage=1 00:22:30.013 --rc genhtml_legend=1 00:22:30.013 --rc geninfo_all_blocks=1 00:22:30.013 --rc geninfo_unexecuted_blocks=1 00:22:30.013 00:22:30.013 ' 00:22:30.013 07:37:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.013 --rc genhtml_branch_coverage=1 00:22:30.013 --rc genhtml_function_coverage=1 00:22:30.013 --rc genhtml_legend=1 00:22:30.013 --rc geninfo_all_blocks=1 00:22:30.013 --rc geninfo_unexecuted_blocks=1 00:22:30.013 00:22:30.013 ' 00:22:30.013 07:37:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:30.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.013 --rc genhtml_branch_coverage=1 00:22:30.013 --rc genhtml_function_coverage=1 00:22:30.013 --rc genhtml_legend=1 00:22:30.013 --rc geninfo_all_blocks=1 00:22:30.013 --rc geninfo_unexecuted_blocks=1 00:22:30.013 00:22:30.013 ' 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:30.013 07:37:39 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:30.013 07:37:39 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:30.013 07:37:39 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:30.013 07:37:39 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:30.013 07:37:39 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:30.013 07:37:39 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.013 07:37:39 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:30.013 07:37:39 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:30.013 07:37:39 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.013 07:37:39 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.013 07:37:39 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:30.013 07:37:39 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:30.013 07:37:39 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:30.013 07:37:39 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:30.013 07:37:39 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:30.013 07:37:39 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:30.013 07:37:39 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.013 07:37:39 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.013 07:37:39 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:30.013 07:37:39 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:30.013 07:37:39 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:30.013 07:37:39 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:30.013 07:37:39 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:30.013 07:37:39 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:30.013 07:37:39 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:30.013 07:37:39 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:30.013 07:37:39 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.013 07:37:39 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76011 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76011 00:22:30.013 07:37:39 -- common/autotest_common.sh@829 -- # '[' -z 76011 ']' 00:22:30.013 07:37:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.013 
07:37:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.013 07:37:39 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:30.014 07:37:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.014 07:37:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.014 07:37:39 -- common/autotest_common.sh@10 -- # set +x 00:22:30.014 [2024-11-19 07:37:39.133680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:30.014 [2024-11-19 07:37:39.133991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76011 ] 00:22:30.274 [2024-11-19 07:37:39.285017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.274 [2024-11-19 07:37:39.505906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:30.274 [2024-11-19 07:37:39.506391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.662 07:37:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.662 07:37:40 -- common/autotest_common.sh@862 -- # return 0 00:22:31.662 07:37:40 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:31.662 07:37:40 -- ftl/common.sh@54 -- # local name=nvme0 00:22:31.662 07:37:40 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:31.662 07:37:40 -- ftl/common.sh@56 -- # local size=103424 00:22:31.662 07:37:40 -- ftl/common.sh@59 -- # local base_bdev 00:22:31.662 07:37:40 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:31.924 07:37:40 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:31.924 07:37:40 -- ftl/common.sh@62 -- # local base_size 00:22:31.924 07:37:40 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:31.924 07:37:40 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:22:31.924 07:37:40 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:31.924 07:37:40 -- common/autotest_common.sh@1369 -- # local bs 00:22:31.924 07:37:40 -- common/autotest_common.sh@1370 -- # local nb 00:22:31.924 07:37:40 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:32.186 07:37:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:32.186 { 00:22:32.186 "name": "nvme0n1", 00:22:32.186 "aliases": [ 00:22:32.186 "c8467f83-1403-4410-8823-8a9ec48906c8" 00:22:32.186 ], 00:22:32.186 "product_name": "NVMe disk", 00:22:32.186 "block_size": 4096, 00:22:32.186 "num_blocks": 1310720, 00:22:32.186 "uuid": "c8467f83-1403-4410-8823-8a9ec48906c8", 00:22:32.186 "assigned_rate_limits": { 00:22:32.186 "rw_ios_per_sec": 0, 00:22:32.186 "rw_mbytes_per_sec": 0, 00:22:32.186 "r_mbytes_per_sec": 0, 00:22:32.186 "w_mbytes_per_sec": 0 00:22:32.186 }, 00:22:32.186 "claimed": true, 00:22:32.186 "claim_type": "read_many_write_one", 00:22:32.186 "zoned": false, 00:22:32.186 "supported_io_types": { 00:22:32.186 "read": true, 00:22:32.186 "write": true, 00:22:32.186 "unmap": true, 00:22:32.186 "write_zeroes": true, 00:22:32.186 "flush": true, 00:22:32.186 "reset": true, 00:22:32.186 "compare": 
true, 00:22:32.186 "compare_and_write": false, 00:22:32.186 "abort": true, 00:22:32.186 "nvme_admin": true, 00:22:32.186 "nvme_io": true 00:22:32.186 }, 00:22:32.186 "driver_specific": { 00:22:32.186 "nvme": [ 00:22:32.186 { 00:22:32.186 "pci_address": "0000:00:07.0", 00:22:32.186 "trid": { 00:22:32.186 "trtype": "PCIe", 00:22:32.186 "traddr": "0000:00:07.0" 00:22:32.186 }, 00:22:32.186 "ctrlr_data": { 00:22:32.186 "cntlid": 0, 00:22:32.186 "vendor_id": "0x1b36", 00:22:32.186 "model_number": "QEMU NVMe Ctrl", 00:22:32.186 "serial_number": "12341", 00:22:32.186 "firmware_revision": "8.0.0", 00:22:32.186 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:32.186 "oacs": { 00:22:32.186 "security": 0, 00:22:32.186 "format": 1, 00:22:32.186 "firmware": 0, 00:22:32.186 "ns_manage": 1 00:22:32.186 }, 00:22:32.186 "multi_ctrlr": false, 00:22:32.186 "ana_reporting": false 00:22:32.186 }, 00:22:32.186 "vs": { 00:22:32.186 "nvme_version": "1.4" 00:22:32.186 }, 00:22:32.186 "ns_data": { 00:22:32.186 "id": 1, 00:22:32.186 "can_share": false 00:22:32.186 } 00:22:32.186 } 00:22:32.186 ], 00:22:32.186 "mp_policy": "active_passive" 00:22:32.186 } 00:22:32.186 } 00:22:32.186 ]' 00:22:32.186 07:37:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:32.186 07:37:41 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:32.186 07:37:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:32.186 07:37:41 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:32.186 07:37:41 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:32.186 07:37:41 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:32.186 07:37:41 -- ftl/common.sh@63 -- # base_size=5120 00:22:32.186 07:37:41 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:32.186 07:37:41 -- ftl/common.sh@67 -- # clear_lvols 00:22:32.186 07:37:41 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:32.186 07:37:41 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:32.447 07:37:41 -- ftl/common.sh@28 -- # stores=685f1ed7-f614-4605-a0e7-1d55726cacb3 00:22:32.447 07:37:41 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:32.447 07:37:41 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 685f1ed7-f614-4605-a0e7-1d55726cacb3 00:22:32.447 07:37:41 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:32.708 07:37:41 -- ftl/common.sh@68 -- # lvs=7c59f9be-b53d-41c9-bda1-595bf6bae969 00:22:32.708 07:37:41 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7c59f9be-b53d-41c9-bda1-595bf6bae969 00:22:32.977 07:37:42 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=71297df2-141a-4946-b43e-483dc533929a 00:22:32.977 07:37:42 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:32.977 07:37:42 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 71297df2-141a-4946-b43e-483dc533929a 00:22:32.977 07:37:42 -- ftl/common.sh@35 -- # local name=nvc0 00:22:32.977 07:37:42 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:32.977 07:37:42 -- ftl/common.sh@37 -- # local base_bdev=71297df2-141a-4946-b43e-483dc533929a 00:22:32.977 07:37:42 -- ftl/common.sh@38 -- # local cache_size= 00:22:32.977 07:37:42 -- ftl/common.sh@41 -- # get_bdev_size 71297df2-141a-4946-b43e-483dc533929a 00:22:32.977 07:37:42 -- common/autotest_common.sh@1367 -- # local bdev_name=71297df2-141a-4946-b43e-483dc533929a 00:22:32.977 07:37:42 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:22:32.977 07:37:42 -- common/autotest_common.sh@1369 -- # local bs 00:22:32.977 07:37:42 -- common/autotest_common.sh@1370 -- # local nb 00:22:32.977 07:37:42 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71297df2-141a-4946-b43e-483dc533929a 00:22:33.238 07:37:42 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:33.238 { 00:22:33.238 "name": "71297df2-141a-4946-b43e-483dc533929a", 00:22:33.238 "aliases": [ 00:22:33.238 "lvs/nvme0n1p0" 00:22:33.238 ], 00:22:33.238 "product_name": "Logical Volume", 00:22:33.238 "block_size": 4096, 00:22:33.238 "num_blocks": 26476544, 00:22:33.238 "uuid": "71297df2-141a-4946-b43e-483dc533929a", 00:22:33.238 "assigned_rate_limits": { 00:22:33.238 "rw_ios_per_sec": 0, 00:22:33.238 "rw_mbytes_per_sec": 0, 00:22:33.238 "r_mbytes_per_sec": 0, 00:22:33.238 "w_mbytes_per_sec": 0 00:22:33.238 }, 00:22:33.238 "claimed": false, 00:22:33.238 "zoned": false, 00:22:33.238 "supported_io_types": { 00:22:33.238 "read": true, 00:22:33.238 "write": true, 00:22:33.238 "unmap": true, 00:22:33.238 "write_zeroes": true, 00:22:33.238 "flush": false, 00:22:33.238 "reset": true, 00:22:33.238 "compare": false, 00:22:33.238 "compare_and_write": false, 00:22:33.238 "abort": false, 00:22:33.238 "nvme_admin": false, 00:22:33.238 "nvme_io": false 00:22:33.238 }, 00:22:33.238 "driver_specific": { 00:22:33.238 "lvol": { 00:22:33.238 "lvol_store_uuid": "7c59f9be-b53d-41c9-bda1-595bf6bae969", 00:22:33.238 "base_bdev": "nvme0n1", 00:22:33.238 "thin_provision": true, 00:22:33.238 "snapshot": false, 00:22:33.238 "clone": false, 00:22:33.238 "esnap_clone": false 00:22:33.238 } 00:22:33.238 } 00:22:33.238 } 00:22:33.238 ]' 00:22:33.238 07:37:42 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:33.238 07:37:42 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:33.238 07:37:42 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:33.238 07:37:42 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:33.238 07:37:42 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:33.238 07:37:42 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:33.238 07:37:42 -- ftl/common.sh@41 -- # local base_size=5171 00:22:33.238 07:37:42 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:33.238 07:37:42 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:33.499 07:37:42 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:33.499 07:37:42 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:33.499 07:37:42 -- ftl/common.sh@48 -- # get_bdev_size 71297df2-141a-4946-b43e-483dc533929a 00:22:33.499 07:37:42 -- common/autotest_common.sh@1367 -- # local bdev_name=71297df2-141a-4946-b43e-483dc533929a 00:22:33.499 07:37:42 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:33.499 07:37:42 -- common/autotest_common.sh@1369 -- # local bs 00:22:33.499 07:37:42 -- common/autotest_common.sh@1370 -- # local nb 00:22:33.499 07:37:42 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71297df2-141a-4946-b43e-483dc533929a 00:22:33.761 07:37:42 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:33.761 { 00:22:33.761 "name": "71297df2-141a-4946-b43e-483dc533929a", 00:22:33.761 "aliases": [ 00:22:33.761 "lvs/nvme0n1p0" 00:22:33.761 ], 00:22:33.761 "product_name": "Logical Volume", 00:22:33.761 "block_size": 4096, 00:22:33.761 "num_blocks": 26476544, 
00:22:33.761 "uuid": "71297df2-141a-4946-b43e-483dc533929a", 00:22:33.761 "assigned_rate_limits": { 00:22:33.761 "rw_ios_per_sec": 0, 00:22:33.761 "rw_mbytes_per_sec": 0, 00:22:33.761 "r_mbytes_per_sec": 0, 00:22:33.761 "w_mbytes_per_sec": 0 00:22:33.761 }, 00:22:33.761 "claimed": false, 00:22:33.761 "zoned": false, 00:22:33.761 "supported_io_types": { 00:22:33.761 "read": true, 00:22:33.761 "write": true, 00:22:33.761 "unmap": true, 00:22:33.761 "write_zeroes": true, 00:22:33.761 "flush": false, 00:22:33.761 "reset": true, 00:22:33.761 "compare": false, 00:22:33.761 "compare_and_write": false, 00:22:33.761 "abort": false, 00:22:33.761 "nvme_admin": false, 00:22:33.761 "nvme_io": false 00:22:33.761 }, 00:22:33.761 "driver_specific": { 00:22:33.761 "lvol": { 00:22:33.761 "lvol_store_uuid": "7c59f9be-b53d-41c9-bda1-595bf6bae969", 00:22:33.761 "base_bdev": "nvme0n1", 00:22:33.761 "thin_provision": true, 00:22:33.761 "snapshot": false, 00:22:33.761 "clone": false, 00:22:33.761 "esnap_clone": false 00:22:33.761 } 00:22:33.761 } 00:22:33.761 } 00:22:33.761 ]' 00:22:33.761 07:37:42 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:33.761 07:37:42 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:33.761 07:37:42 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:33.761 07:37:42 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:33.761 07:37:42 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:33.761 07:37:42 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:33.761 07:37:42 -- ftl/common.sh@48 -- # cache_size=5171 00:22:33.761 07:37:42 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:34.021 07:37:43 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:34.021 07:37:43 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 71297df2-141a-4946-b43e-483dc533929a 00:22:34.021 07:37:43 -- common/autotest_common.sh@1367 -- # local bdev_name=71297df2-141a-4946-b43e-483dc533929a 00:22:34.021 07:37:43 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:34.021 07:37:43 -- common/autotest_common.sh@1369 -- # local bs 00:22:34.021 07:37:43 -- common/autotest_common.sh@1370 -- # local nb 00:22:34.021 07:37:43 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71297df2-141a-4946-b43e-483dc533929a 00:22:34.283 07:37:43 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:34.283 { 00:22:34.283 "name": "71297df2-141a-4946-b43e-483dc533929a", 00:22:34.283 "aliases": [ 00:22:34.283 "lvs/nvme0n1p0" 00:22:34.283 ], 00:22:34.283 "product_name": "Logical Volume", 00:22:34.283 "block_size": 4096, 00:22:34.283 "num_blocks": 26476544, 00:22:34.283 "uuid": "71297df2-141a-4946-b43e-483dc533929a", 00:22:34.283 "assigned_rate_limits": { 00:22:34.283 "rw_ios_per_sec": 0, 00:22:34.283 "rw_mbytes_per_sec": 0, 00:22:34.283 "r_mbytes_per_sec": 0, 00:22:34.283 "w_mbytes_per_sec": 0 00:22:34.283 }, 00:22:34.283 "claimed": false, 00:22:34.283 "zoned": false, 00:22:34.283 "supported_io_types": { 00:22:34.283 "read": true, 00:22:34.283 "write": true, 00:22:34.283 "unmap": true, 00:22:34.283 "write_zeroes": true, 00:22:34.283 "flush": false, 00:22:34.283 "reset": true, 00:22:34.283 "compare": false, 00:22:34.283 "compare_and_write": false, 00:22:34.283 "abort": false, 00:22:34.283 "nvme_admin": false, 00:22:34.283 "nvme_io": false 00:22:34.283 }, 00:22:34.283 "driver_specific": { 00:22:34.283 "lvol": { 00:22:34.283 "lvol_store_uuid": 
"7c59f9be-b53d-41c9-bda1-595bf6bae969", 00:22:34.283 "base_bdev": "nvme0n1", 00:22:34.283 "thin_provision": true, 00:22:34.283 "snapshot": false, 00:22:34.283 "clone": false, 00:22:34.283 "esnap_clone": false 00:22:34.283 } 00:22:34.283 } 00:22:34.283 } 00:22:34.283 ]' 00:22:34.283 07:37:43 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:34.283 07:37:43 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:34.283 07:37:43 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:34.283 07:37:43 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:34.283 07:37:43 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:34.283 07:37:43 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:34.284 07:37:43 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:34.284 07:37:43 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 71297df2-141a-4946-b43e-483dc533929a --l2p_dram_limit 10' 00:22:34.284 07:37:43 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:34.284 07:37:43 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:34.284 07:37:43 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:34.284 07:37:43 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 71297df2-141a-4946-b43e-483dc533929a --l2p_dram_limit 10 -c nvc0n1p0 00:22:34.284 [2024-11-19 07:37:43.501941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.501980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:34.284 [2024-11-19 07:37:43.501993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:34.284 [2024-11-19 07:37:43.502001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.502043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.502050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:34.284 [2024-11-19 07:37:43.502059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:34.284 [2024-11-19 07:37:43.502065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.502081] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:34.284 [2024-11-19 07:37:43.502703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:34.284 [2024-11-19 07:37:43.502723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.502729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:34.284 [2024-11-19 07:37:43.502736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:22:34.284 [2024-11-19 07:37:43.502742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.502809] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 05c4e17a-a781-4bc8-81e7-361c5c102370 00:22:34.284 [2024-11-19 07:37:43.503751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.503770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:34.284 [2024-11-19 07:37:43.503777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.018 ms 00:22:34.284 [2024-11-19 07:37:43.503784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.508501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.508529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:34.284 [2024-11-19 07:37:43.508536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:22:34.284 [2024-11-19 07:37:43.508543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.508608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.508617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:34.284 [2024-11-19 07:37:43.508623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:34.284 [2024-11-19 07:37:43.508632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.508670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.508681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:34.284 [2024-11-19 07:37:43.508687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:34.284 [2024-11-19 07:37:43.508693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.508711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:34.284 [2024-11-19 07:37:43.511695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.511720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:34.284 [2024-11-19 07:37:43.511729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:22:34.284 [2024-11-19 07:37:43.511735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.511765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.511771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:34.284 [2024-11-19 07:37:43.511779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:34.284 [2024-11-19 07:37:43.511784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.511798] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:34.284 [2024-11-19 07:37:43.511961] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:34.284 [2024-11-19 07:37:43.511972] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:34.284 [2024-11-19 07:37:43.511980] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:34.284 [2024-11-19 07:37:43.511989] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:34.284 [2024-11-19 07:37:43.511996] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512004] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:34.284 [2024-11-19 
07:37:43.512015] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:34.284 [2024-11-19 07:37:43.512022] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:34.284 [2024-11-19 07:37:43.512027] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:34.284 [2024-11-19 07:37:43.512035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.512041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:34.284 [2024-11-19 07:37:43.512048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:22:34.284 [2024-11-19 07:37:43.512053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.512101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.284 [2024-11-19 07:37:43.512107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:34.284 [2024-11-19 07:37:43.512114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:34.284 [2024-11-19 07:37:43.512120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.284 [2024-11-19 07:37:43.512191] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:34.284 [2024-11-19 07:37:43.512199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:34.284 [2024-11-19 07:37:43.512207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512220] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:34.284 [2024-11-19 07:37:43.512225] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:34.284 [2024-11-19 07:37:43.512242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:34.284 [2024-11-19 07:37:43.512254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:34.284 [2024-11-19 07:37:43.512259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:34.284 [2024-11-19 07:37:43.512267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:34.284 [2024-11-19 07:37:43.512272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:34.284 [2024-11-19 07:37:43.512278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:34.284 [2024-11-19 07:37:43.512284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:34.284 [2024-11-19 07:37:43.512296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:34.284 [2024-11-19 07:37:43.512302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512307] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:34.284 [2024-11-19 07:37:43.512314] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:34.284 [2024-11-19 07:37:43.512319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:34.284 [2024-11-19 07:37:43.512329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:34.284 [2024-11-19 07:37:43.512346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:34.284 [2024-11-19 07:37:43.512362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:34.284 [2024-11-19 07:37:43.512379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.284 [2024-11-19 07:37:43.512390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:34.284 [2024-11-19 07:37:43.512395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:34.284 [2024-11-19 07:37:43.512401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:34.284 [2024-11-19 07:37:43.512405] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:34.284 [2024-11-19 07:37:43.512412] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:34.284 [2024-11-19 07:37:43.512417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:34.284 [2024-11-19 07:37:43.512422] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:34.284 [2024-11-19 07:37:43.512428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:34.285 [2024-11-19 07:37:43.512434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:34.285 [2024-11-19 07:37:43.512439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.285 [2024-11-19 07:37:43.512448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:34.285 [2024-11-19 07:37:43.512453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:34.285 [2024-11-19 07:37:43.512458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:34.285 [2024-11-19 07:37:43.512464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:34.285 [2024-11-19 07:37:43.512471] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:34.285 [2024-11-19 07:37:43.512476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:34.285 [2024-11-19 07:37:43.512483] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:34.285 [2024-11-19 07:37:43.512491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:34.285 [2024-11-19 07:37:43.512499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:34.285 [2024-11-19 07:37:43.512504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:34.285 [2024-11-19 07:37:43.512510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:34.285 [2024-11-19 07:37:43.512516] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:34.285 [2024-11-19 07:37:43.512522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:34.285 [2024-11-19 07:37:43.512527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:34.285 [2024-11-19 07:37:43.512534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:34.285 [2024-11-19 07:37:43.512539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:34.285 [2024-11-19 07:37:43.512545] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:34.285 [2024-11-19 07:37:43.512550] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:34.285 [2024-11-19 07:37:43.512556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:34.285 [2024-11-19 07:37:43.512561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:34.285 [2024-11-19 07:37:43.512570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:34.285 [2024-11-19 07:37:43.512575] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:34.285 [2024-11-19 07:37:43.512583] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:34.285 [2024-11-19 07:37:43.512589] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:34.285 [2024-11-19 07:37:43.512596] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:34.285 [2024-11-19 07:37:43.512601] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:34.285 [2024-11-19 07:37:43.512608] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:34.285 [2024-11-19 07:37:43.512613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.285 [2024-11-19 07:37:43.512620] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:34.285 [2024-11-19 07:37:43.512625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:22:34.285 [2024-11-19 07:37:43.512632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.285 [2024-11-19 07:37:43.524468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.285 [2024-11-19 07:37:43.524500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.285 [2024-11-19 07:37:43.524508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.794 ms 00:22:34.285 [2024-11-19 07:37:43.524515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.285 [2024-11-19 07:37:43.524583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.285 [2024-11-19 07:37:43.524591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:34.285 [2024-11-19 07:37:43.524599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:34.285 [2024-11-19 07:37:43.524605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.548500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.547 [2024-11-19 07:37:43.548529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.547 [2024-11-19 07:37:43.548537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.862 ms 00:22:34.547 [2024-11-19 07:37:43.548545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.548568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.547 [2024-11-19 07:37:43.548576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.547 [2024-11-19 07:37:43.548583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:34.547 [2024-11-19 07:37:43.548591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.548883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.547 [2024-11-19 07:37:43.548898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.547 [2024-11-19 07:37:43.548905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:22:34.547 [2024-11-19 07:37:43.548911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.548995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.547 [2024-11-19 07:37:43.549005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.547 [2024-11-19 07:37:43.549011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:34.547 [2024-11-19 07:37:43.549018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.560933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.547 [2024-11-19 07:37:43.561050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.547 [2024-11-19 07:37:43.561063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.900 ms 00:22:34.547 [2024-11-19 07:37:43.561070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.570111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 
(of 10) MiB 00:22:34.547 [2024-11-19 07:37:43.572386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.547 [2024-11-19 07:37:43.572408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:34.547 [2024-11-19 07:37:43.572418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.257 ms 00:22:34.547 [2024-11-19 07:37:43.572425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.634558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.547 [2024-11-19 07:37:43.634603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:34.547 [2024-11-19 07:37:43.634621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.108 ms 00:22:34.547 [2024-11-19 07:37:43.634630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.547 [2024-11-19 07:37:43.634671] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:22:34.547 [2024-11-19 07:37:43.634683] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:38.758 [2024-11-19 07:37:47.438809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.758 [2024-11-19 07:37:47.438886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:38.758 [2024-11-19 07:37:47.438910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3804.114 ms 00:22:38.758 [2024-11-19 07:37:47.438920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.758 [2024-11-19 07:37:47.439147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.758 [2024-11-19 07:37:47.439160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:38.758 [2024-11-19 07:37:47.439177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:22:38.758 [2024-11-19 07:37:47.439211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.758 [2024-11-19 07:37:47.466670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.758 [2024-11-19 07:37:47.466725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:38.758 [2024-11-19 07:37:47.466743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.396 ms 00:22:38.759 [2024-11-19 07:37:47.466751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.492987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.493225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:38.759 [2024-11-19 07:37:47.493261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.173 ms 00:22:38.759 [2024-11-19 07:37:47.493269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.493642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.493655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:38.759 [2024-11-19 07:37:47.493666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:22:38.759 [2024-11-19 07:37:47.493673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.567244] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.567301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:38.759 [2024-11-19 07:37:47.567318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.503 ms 00:22:38.759 [2024-11-19 07:37:47.567326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.596128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.596210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:38.759 [2024-11-19 07:37:47.596228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.763 ms 00:22:38.759 [2024-11-19 07:37:47.596236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.598025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.598074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:38.759 [2024-11-19 07:37:47.598090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:22:38.759 [2024-11-19 07:37:47.598098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.625428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.625488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:38.759 [2024-11-19 07:37:47.625506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.261 ms 00:22:38.759 [2024-11-19 07:37:47.625513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.625555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.625564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:38.759 [2024-11-19 07:37:47.625575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:38.759 [2024-11-19 07:37:47.625582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.625682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.759 [2024-11-19 07:37:47.625692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:38.759 [2024-11-19 07:37:47.625703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:38.759 [2024-11-19 07:37:47.625710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.759 [2024-11-19 07:37:47.626878] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4124.424 ms, result 0 00:22:38.759 { 00:22:38.759 "name": "ftl0", 00:22:38.759 "uuid": "05c4e17a-a781-4bc8-81e7-361c5c102370" 00:22:38.759 } 00:22:38.759 07:37:47 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:38.759 07:37:47 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:38.759 07:37:47 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:38.759 07:37:47 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:38.759 07:37:47 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:39.020 /dev/nbd0 00:22:39.020 07:37:48 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:39.020 07:37:48 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:39.020 07:37:48 -- common/autotest_common.sh@867 -- # local i 00:22:39.020 07:37:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:39.020 07:37:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:39.020 07:37:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:39.020 07:37:48 -- common/autotest_common.sh@871 -- # break 00:22:39.020 07:37:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:39.020 07:37:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:39.020 07:37:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:39.020 1+0 records in 00:22:39.021 1+0 records out 00:22:39.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736427 s, 5.6 MB/s 00:22:39.021 07:37:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:39.021 07:37:48 -- common/autotest_common.sh@884 -- # size=4096 00:22:39.021 07:37:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:39.021 07:37:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:39.021 07:37:48 -- common/autotest_common.sh@887 -- # return 0 00:22:39.021 07:37:48 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:39.021 [2024-11-19 07:37:48.161938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:39.021 [2024-11-19 07:37:48.162073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76166 ] 00:22:39.282 [2024-11-19 07:37:48.310853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.544 [2024-11-19 07:37:48.542154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.931  [2024-11-19T07:37:51.125Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-19T07:37:52.068Z] Copying: 432/1024 [MB] (242 MBps) [2024-11-19T07:37:53.007Z] Copying: 692/1024 [MB] (259 MBps) [2024-11-19T07:37:53.269Z] Copying: 950/1024 [MB] (257 MBps) [2024-11-19T07:37:53.841Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:22:44.591 00:22:44.591 07:37:53 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:46.507 07:37:55 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:46.507 [2024-11-19 07:37:55.742994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
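The spdk_dd run starting here is the core write phase of the dirty-shutdown scenario: a 1 GiB payload (262144 blocks of 4096 B, generated above from /dev/urandom and checksummed with md5sum so later stages can compare it) is written through the FTL bdev exposed at /dev/nbd0 with O_DIRECT and then synced. A hedged sketch of the same data path, with plain dd standing in for the repo's spdk_dd (which talks to the target over /var/tmp/spdk_dd.sock) and an illustrative testfile path:

testfile=/tmp/ftl_testfile                       # stand-in for test/ftl/testfile
dd if=/dev/urandom of="$testfile" bs=4096 count=262144   # 1 GiB payload
md5_before=$(md5sum "$testfile" | awk '{print $1}')      # checksum before the write
dd if="$testfile" of=/dev/nbd0 bs=4096 count=262144 oflag=direct
sync /dev/nbd0                                   # flush before the device is torn down
echo "wrote 1 GiB through ftl0 via nbd, md5=$md5_before"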
00:22:46.507 [2024-11-19 07:37:55.743189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76255 ] 00:22:46.768 [2024-11-19 07:37:55.884012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.768 [2024-11-19 07:37:56.021240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.183  [2024-11-19T07:37:58.399Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-19T07:37:59.342Z] Copying: 60/1024 [MB] (31 MBps) [2024-11-19T07:38:00.288Z] Copying: 90/1024 [MB] (30 MBps) [2024-11-19T07:38:01.232Z] Copying: 120/1024 [MB] (29 MBps) [2024-11-19T07:38:02.619Z] Copying: 152/1024 [MB] (32 MBps) [2024-11-19T07:38:03.192Z] Copying: 177/1024 [MB] (25 MBps) [2024-11-19T07:38:04.576Z] Copying: 197/1024 [MB] (20 MBps) [2024-11-19T07:38:05.517Z] Copying: 217/1024 [MB] (19 MBps) [2024-11-19T07:38:06.457Z] Copying: 240/1024 [MB] (23 MBps) [2024-11-19T07:38:07.396Z] Copying: 269/1024 [MB] (28 MBps) [2024-11-19T07:38:08.338Z] Copying: 298/1024 [MB] (29 MBps) [2024-11-19T07:38:09.281Z] Copying: 324/1024 [MB] (26 MBps) [2024-11-19T07:38:10.224Z] Copying: 353/1024 [MB] (28 MBps) [2024-11-19T07:38:11.609Z] Copying: 379/1024 [MB] (25 MBps) [2024-11-19T07:38:12.551Z] Copying: 410/1024 [MB] (30 MBps) [2024-11-19T07:38:13.513Z] Copying: 426/1024 [MB] (16 MBps) [2024-11-19T07:38:14.457Z] Copying: 442/1024 [MB] (15 MBps) [2024-11-19T07:38:15.401Z] Copying: 455/1024 [MB] (12 MBps) [2024-11-19T07:38:16.345Z] Copying: 468/1024 [MB] (13 MBps) [2024-11-19T07:38:17.283Z] Copying: 493/1024 [MB] (24 MBps) [2024-11-19T07:38:18.219Z] Copying: 524/1024 [MB] (31 MBps) [2024-11-19T07:38:19.597Z] Copying: 552/1024 [MB] (28 MBps) [2024-11-19T07:38:20.541Z] Copying: 582/1024 [MB] (29 MBps) [2024-11-19T07:38:21.485Z] Copying: 610/1024 [MB] (27 MBps) [2024-11-19T07:38:22.431Z] Copying: 632/1024 [MB] (22 MBps) [2024-11-19T07:38:23.376Z] Copying: 652/1024 [MB] (19 MBps) [2024-11-19T07:38:24.321Z] Copying: 664/1024 [MB] (12 MBps) [2024-11-19T07:38:25.265Z] Copying: 678/1024 [MB] (13 MBps) [2024-11-19T07:38:26.207Z] Copying: 701/1024 [MB] (22 MBps) [2024-11-19T07:38:27.595Z] Copying: 733/1024 [MB] (32 MBps) [2024-11-19T07:38:28.537Z] Copying: 768/1024 [MB] (34 MBps) [2024-11-19T07:38:29.476Z] Copying: 795/1024 [MB] (26 MBps) [2024-11-19T07:38:30.416Z] Copying: 828/1024 [MB] (33 MBps) [2024-11-19T07:38:31.356Z] Copying: 864/1024 [MB] (35 MBps) [2024-11-19T07:38:32.296Z] Copying: 895/1024 [MB] (31 MBps) [2024-11-19T07:38:33.237Z] Copying: 925/1024 [MB] (30 MBps) [2024-11-19T07:38:34.249Z] Copying: 957/1024 [MB] (31 MBps) [2024-11-19T07:38:35.192Z] Copying: 982/1024 [MB] (24 MBps) [2024-11-19T07:38:35.764Z] Copying: 1006/1024 [MB] (23 MBps) [2024-11-19T07:38:36.706Z] Copying: 1024/1024 [MB] (average 25 MBps) 00:23:27.456 00:23:27.456 07:38:36 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:27.456 07:38:36 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:27.456 07:38:36 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:27.718 [2024-11-19 07:38:36.761150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.761327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:27.718 [2024-11-19 07:38:36.761347] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:27.718 [2024-11-19 07:38:36.761356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.761381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:27.718 [2024-11-19 07:38:36.763514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.763542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:27.718 [2024-11-19 07:38:36.763552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:23:27.718 [2024-11-19 07:38:36.763558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.765305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.765331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:27.718 [2024-11-19 07:38:36.765344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:23:27.718 [2024-11-19 07:38:36.765351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.779227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.779254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:27.718 [2024-11-19 07:38:36.779265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.859 ms 00:23:27.718 [2024-11-19 07:38:36.779272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.784028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.784052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:27.718 [2024-11-19 07:38:36.784063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:23:27.718 [2024-11-19 07:38:36.784071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.802783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.802811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:27.718 [2024-11-19 07:38:36.802821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.659 ms 00:23:27.718 [2024-11-19 07:38:36.802827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.815300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.815413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:27.718 [2024-11-19 07:38:36.815431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.441 ms 00:23:27.718 [2024-11-19 07:38:36.815437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.815549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.815557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:27.718 [2024-11-19 07:38:36.815565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:27.718 [2024-11-19 07:38:36.815571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.833460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:23:27.718 [2024-11-19 07:38:36.833490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:27.718 [2024-11-19 07:38:36.833499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.872 ms 00:23:27.718 [2024-11-19 07:38:36.833505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.851456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.851482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:27.718 [2024-11-19 07:38:36.851491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.920 ms 00:23:27.718 [2024-11-19 07:38:36.851496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.869292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.869317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:27.718 [2024-11-19 07:38:36.869326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.764 ms 00:23:27.718 [2024-11-19 07:38:36.869332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.886489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.718 [2024-11-19 07:38:36.886585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:27.718 [2024-11-19 07:38:36.886613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.098 ms 00:23:27.718 [2024-11-19 07:38:36.886618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.718 [2024-11-19 07:38:36.886646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:27.718 [2024-11-19 07:38:36.886657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 
[2024-11-19 07:38:36.886736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:27.718 [2024-11-19 07:38:36.886847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:23:27.719 [2024-11-19 07:38:36.886897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.886994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:27.719 [2024-11-19 07:38:36.887329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:27.719 [2024-11-19 07:38:36.887335] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05c4e17a-a781-4bc8-81e7-361c5c102370 00:23:27.719 [2024-11-19 07:38:36.887343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:27.719 [2024-11-19 07:38:36.887350] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:27.719 [2024-11-19 07:38:36.887355] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:27.719 [2024-11-19 07:38:36.887362] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:27.719 [2024-11-19 07:38:36.887367] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:27.719 [2024-11-19 07:38:36.887374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:27.719 [2024-11-19 07:38:36.887380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:27.719 [2024-11-19 07:38:36.887386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:27.719 [2024-11-19 07:38:36.887390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:27.719 [2024-11-19 07:38:36.887398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.719 [2024-11-19 07:38:36.887406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:27.719 [2024-11-19 07:38:36.887414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:23:27.719 [2024-11-19 
07:38:36.887419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.719 [2024-11-19 07:38:36.897164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.719 [2024-11-19 07:38:36.897198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:27.719 [2024-11-19 07:38:36.897208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.718 ms 00:23:27.719 [2024-11-19 07:38:36.897214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.719 [2024-11-19 07:38:36.897367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.719 [2024-11-19 07:38:36.897373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:27.719 [2024-11-19 07:38:36.897380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:23:27.719 [2024-11-19 07:38:36.897385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.720 [2024-11-19 07:38:36.932481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.720 [2024-11-19 07:38:36.932509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:27.720 [2024-11-19 07:38:36.932518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.720 [2024-11-19 07:38:36.932524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.720 [2024-11-19 07:38:36.932573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.720 [2024-11-19 07:38:36.932579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:27.720 [2024-11-19 07:38:36.932586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.720 [2024-11-19 07:38:36.932591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.720 [2024-11-19 07:38:36.932644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.720 [2024-11-19 07:38:36.932651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:27.720 [2024-11-19 07:38:36.932658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.720 [2024-11-19 07:38:36.932664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.720 [2024-11-19 07:38:36.932678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.720 [2024-11-19 07:38:36.932684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:27.720 [2024-11-19 07:38:36.932691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.720 [2024-11-19 07:38:36.932697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:36.991354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:36.991390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:27.981 [2024-11-19 07:38:36.991400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:36.991407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.013544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:37.013652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:27.981 [2024-11-19 07:38:37.013667] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:37.013673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.013725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:37.013732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:27.981 [2024-11-19 07:38:37.013739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:37.013745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.013778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:37.013785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:27.981 [2024-11-19 07:38:37.013792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:37.013798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.013869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:37.013877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:27.981 [2024-11-19 07:38:37.013884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:37.013889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.013915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:37.013921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:27.981 [2024-11-19 07:38:37.013929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:37.013934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.013963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:37.013970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:27.981 [2024-11-19 07:38:37.013977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:37.013982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.014016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:27.981 [2024-11-19 07:38:37.014023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:27.981 [2024-11-19 07:38:37.014030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:27.981 [2024-11-19 07:38:37.014036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.981 [2024-11-19 07:38:37.014135] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 252.958 ms, result 0 00:23:27.981 true 00:23:27.981 07:38:37 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76011 00:23:27.981 07:38:37 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76011 00:23:27.981 07:38:37 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:27.981 [2024-11-19 07:38:37.090131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:23:27.981 [2024-11-19 07:38:37.090347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76686 ] 00:23:28.242 [2024-11-19 07:38:37.234237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.242 [2024-11-19 07:38:37.449914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.630  [2024-11-19T07:38:39.820Z] Copying: 187/1024 [MB] (187 MBps) [2024-11-19T07:38:40.764Z] Copying: 428/1024 [MB] (240 MBps) [2024-11-19T07:38:42.141Z] Copying: 681/1024 [MB] (253 MBps) [2024-11-19T07:38:42.141Z] Copying: 940/1024 [MB] (258 MBps) [2024-11-19T07:38:42.736Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:23:33.486 00:23:33.486 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76011 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:33.486 07:38:42 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:33.745 [2024-11-19 07:38:42.740157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:33.745 [2024-11-19 07:38:42.740415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76750 ] 00:23:33.745 [2024-11-19 07:38:42.888034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.003 [2024-11-19 07:38:43.027284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.003 [2024-11-19 07:38:43.233684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.004 [2024-11-19 07:38:43.233730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.263 [2024-11-19 07:38:43.293254] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:34.263 [2024-11-19 07:38:43.293540] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:34.263 [2024-11-19 07:38:43.293744] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:34.263 [2024-11-19 07:38:43.466319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.466456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:34.263 [2024-11-19 07:38:43.466472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:34.263 [2024-11-19 07:38:43.466478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.466518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.466526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.263 [2024-11-19 07:38:43.466535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:34.263 [2024-11-19 07:38:43.466541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.466557] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:34.263 [2024-11-19 07:38:43.467098] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:34.263 [2024-11-19 07:38:43.467114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.467120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.263 [2024-11-19 07:38:43.467127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:23:34.263 [2024-11-19 07:38:43.467132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.468130] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:34.263 [2024-11-19 07:38:43.477861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.477975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:34.263 [2024-11-19 07:38:43.477989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.733 ms 00:23:34.263 [2024-11-19 07:38:43.477995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.478034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.478043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:34.263 [2024-11-19 07:38:43.478049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:34.263 [2024-11-19 07:38:43.478054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.482422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.482447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.263 [2024-11-19 07:38:43.482454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:23:34.263 [2024-11-19 07:38:43.482459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.482522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.482529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.263 [2024-11-19 07:38:43.482535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:34.263 [2024-11-19 07:38:43.482540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.482575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.482582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:34.263 [2024-11-19 07:38:43.482588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.263 [2024-11-19 07:38:43.482593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.482610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:34.263 [2024-11-19 07:38:43.485371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.485393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.263 [2024-11-19 07:38:43.485401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:23:34.263 [2024-11-19 07:38:43.485407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.485436] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.263 [2024-11-19 07:38:43.485442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:34.263 [2024-11-19 07:38:43.485449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:34.263 [2024-11-19 07:38:43.485454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.263 [2024-11-19 07:38:43.485468] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:34.263 [2024-11-19 07:38:43.485489] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:34.263 [2024-11-19 07:38:43.485515] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:34.263 [2024-11-19 07:38:43.485528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:34.264 [2024-11-19 07:38:43.485584] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:34.264 [2024-11-19 07:38:43.485591] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:34.264 [2024-11-19 07:38:43.485598] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:34.264 [2024-11-19 07:38:43.485605] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485612] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485618] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:34.264 [2024-11-19 07:38:43.485623] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:34.264 [2024-11-19 07:38:43.485629] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:34.264 [2024-11-19 07:38:43.485634] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:34.264 [2024-11-19 07:38:43.485641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.264 [2024-11-19 07:38:43.485646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:34.264 [2024-11-19 07:38:43.485652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:23:34.264 [2024-11-19 07:38:43.485657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.264 [2024-11-19 07:38:43.485701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.264 [2024-11-19 07:38:43.485707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:34.264 [2024-11-19 07:38:43.485713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:34.264 [2024-11-19 07:38:43.485718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.264 [2024-11-19 07:38:43.485770] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:34.264 [2024-11-19 07:38:43.485777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:34.264 [2024-11-19 07:38:43.485785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:34.264 [2024-11-19 07:38:43.485802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:34.264 [2024-11-19 07:38:43.485819] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.264 [2024-11-19 07:38:43.485829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:34.264 [2024-11-19 07:38:43.485835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:34.264 [2024-11-19 07:38:43.485844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.264 [2024-11-19 07:38:43.485849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:34.264 [2024-11-19 07:38:43.485853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:34.264 [2024-11-19 07:38:43.485858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:34.264 [2024-11-19 07:38:43.485868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:34.264 [2024-11-19 07:38:43.485873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:34.264 [2024-11-19 07:38:43.485883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:34.264 [2024-11-19 07:38:43.485888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:34.264 [2024-11-19 07:38:43.485897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:34.264 [2024-11-19 07:38:43.485913] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:34.264 [2024-11-19 07:38:43.485927] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:34.264 [2024-11-19 07:38:43.485941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:34.264 [2024-11-19 07:38:43.485956] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.12 MiB 00:23:34.264 [2024-11-19 07:38:43.485961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.264 [2024-11-19 07:38:43.485965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:34.264 [2024-11-19 07:38:43.485970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:34.264 [2024-11-19 07:38:43.485975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.264 [2024-11-19 07:38:43.485979] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:34.264 [2024-11-19 07:38:43.485985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:34.264 [2024-11-19 07:38:43.485991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.264 [2024-11-19 07:38:43.485997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.264 [2024-11-19 07:38:43.486003] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:34.264 [2024-11-19 07:38:43.486007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:34.264 [2024-11-19 07:38:43.486012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:34.264 [2024-11-19 07:38:43.486017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:34.264 [2024-11-19 07:38:43.486022] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:34.264 [2024-11-19 07:38:43.486027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:34.264 [2024-11-19 07:38:43.486032] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:34.264 [2024-11-19 07:38:43.486040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.264 [2024-11-19 07:38:43.486046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:34.264 [2024-11-19 07:38:43.486052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:34.264 [2024-11-19 07:38:43.486057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:34.264 [2024-11-19 07:38:43.486063] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:34.264 [2024-11-19 07:38:43.486068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:34.264 [2024-11-19 07:38:43.486073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:34.264 [2024-11-19 07:38:43.486079] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:34.264 [2024-11-19 07:38:43.486084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:34.264 [2024-11-19 07:38:43.486089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:34.264 [2024-11-19 07:38:43.486094] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:34.264 [2024-11-19 07:38:43.486100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:34.264 [2024-11-19 07:38:43.486105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:34.264 [2024-11-19 07:38:43.486111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:34.264 [2024-11-19 07:38:43.486116] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:34.264 [2024-11-19 07:38:43.486122] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.264 [2024-11-19 07:38:43.486130] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:34.264 [2024-11-19 07:38:43.486135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:34.264 [2024-11-19 07:38:43.486141] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:34.264 [2024-11-19 07:38:43.486148] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:34.264 [2024-11-19 07:38:43.486153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.264 [2024-11-19 07:38:43.486159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:34.264 [2024-11-19 07:38:43.486164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:23:34.264 [2024-11-19 07:38:43.486171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.264 [2024-11-19 07:38:43.498106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.264 [2024-11-19 07:38:43.498134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.264 [2024-11-19 07:38:43.498142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.894 ms 00:23:34.264 [2024-11-19 07:38:43.498148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.264 [2024-11-19 07:38:43.498239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.264 [2024-11-19 07:38:43.498247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:34.264 [2024-11-19 07:38:43.498253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:34.265 [2024-11-19 07:38:43.498259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.523 [2024-11-19 07:38:43.534930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.523 [2024-11-19 07:38:43.535044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.523 [2024-11-19 07:38:43.535059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.638 ms 00:23:34.523 [2024-11-19 07:38:43.535066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.523 [2024-11-19 07:38:43.535099] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.523 [2024-11-19 07:38:43.535108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.523 [2024-11-19 07:38:43.535115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:34.523 [2024-11-19 07:38:43.535125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.523 [2024-11-19 07:38:43.535441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.523 [2024-11-19 07:38:43.535454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.523 [2024-11-19 07:38:43.535461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:34.524 [2024-11-19 07:38:43.535468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.535554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.535560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.524 [2024-11-19 07:38:43.535566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:34.524 [2024-11-19 07:38:43.535572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.546624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.546649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.524 [2024-11-19 07:38:43.546657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.036 ms 00:23:34.524 [2024-11-19 07:38:43.546663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.556234] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:34.524 [2024-11-19 07:38:43.556261] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:34.524 [2024-11-19 07:38:43.556269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.556276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:34.524 [2024-11-19 07:38:43.556282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.536 ms 00:23:34.524 [2024-11-19 07:38:43.556288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.574983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.575010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:34.524 [2024-11-19 07:38:43.575022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.665 ms 00:23:34.524 [2024-11-19 07:38:43.575028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.583965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.583988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:34.524 [2024-11-19 07:38:43.583995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.908 ms 00:23:34.524 [2024-11-19 07:38:43.584006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.592805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:34.524 [2024-11-19 07:38:43.592895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:34.524 [2024-11-19 07:38:43.592946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.772 ms 00:23:34.524 [2024-11-19 07:38:43.592963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.593247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.593327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:34.524 [2024-11-19 07:38:43.593392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:23:34.524 [2024-11-19 07:38:43.593409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.639641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.639794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:34.524 [2024-11-19 07:38:43.639840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.207 ms 00:23:34.524 [2024-11-19 07:38:43.639858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.648690] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:34.524 [2024-11-19 07:38:43.651030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.651122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:34.524 [2024-11-19 07:38:43.651166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.884 ms 00:23:34.524 [2024-11-19 07:38:43.651199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.651281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.651336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:34.524 [2024-11-19 07:38:43.651385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.524 [2024-11-19 07:38:43.651400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.651465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.651489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:34.524 [2024-11-19 07:38:43.651505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:34.524 [2024-11-19 07:38:43.651519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.652489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.652575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:34.524 [2024-11-19 07:38:43.652616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:23:34.524 [2024-11-19 07:38:43.652638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.652671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.652687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:34.524 [2024-11-19 07:38:43.652704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:34.524 
[2024-11-19 07:38:43.652718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.652752] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:34.524 [2024-11-19 07:38:43.652891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.652916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:34.524 [2024-11-19 07:38:43.652932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:23:34.524 [2024-11-19 07:38:43.652947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.671222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.671321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:34.524 [2024-11-19 07:38:43.671360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms 00:23:34.524 [2024-11-19 07:38:43.671377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.671435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.524 [2024-11-19 07:38:43.671642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:34.524 [2024-11-19 07:38:43.671678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:34.524 [2024-11-19 07:38:43.671695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.524 [2024-11-19 07:38:43.672794] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 206.138 ms, result 0 00:23:35.460  [2024-11-19T07:38:46.096Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-19T07:38:47.038Z] Copying: 32/1024 [MB] (14 MBps) [2024-11-19T07:38:47.984Z] Copying: 68/1024 [MB] (36 MBps) [2024-11-19T07:38:48.927Z] Copying: 83/1024 [MB] (14 MBps) [2024-11-19T07:38:49.872Z] Copying: 102/1024 [MB] (19 MBps) [2024-11-19T07:38:50.814Z] Copying: 129/1024 [MB] (27 MBps) [2024-11-19T07:38:51.756Z] Copying: 148/1024 [MB] (18 MBps) [2024-11-19T07:38:52.695Z] Copying: 165/1024 [MB] (16 MBps) [2024-11-19T07:38:54.078Z] Copying: 187/1024 [MB] (22 MBps) [2024-11-19T07:38:55.020Z] Copying: 224/1024 [MB] (37 MBps) [2024-11-19T07:38:55.961Z] Copying: 271/1024 [MB] (46 MBps) [2024-11-19T07:38:56.901Z] Copying: 296/1024 [MB] (25 MBps) [2024-11-19T07:38:57.892Z] Copying: 316/1024 [MB] (19 MBps) [2024-11-19T07:38:58.834Z] Copying: 338/1024 [MB] (22 MBps) [2024-11-19T07:38:59.782Z] Copying: 352/1024 [MB] (14 MBps) [2024-11-19T07:39:00.726Z] Copying: 369/1024 [MB] (16 MBps) [2024-11-19T07:39:02.112Z] Copying: 400/1024 [MB] (31 MBps) [2024-11-19T07:39:02.684Z] Copying: 418/1024 [MB] (17 MBps) [2024-11-19T07:39:04.071Z] Copying: 440/1024 [MB] (21 MBps) [2024-11-19T07:39:05.016Z] Copying: 460/1024 [MB] (20 MBps) [2024-11-19T07:39:05.952Z] Copying: 475/1024 [MB] (14 MBps) [2024-11-19T07:39:06.887Z] Copying: 492/1024 [MB] (16 MBps) [2024-11-19T07:39:07.823Z] Copying: 508/1024 [MB] (16 MBps) [2024-11-19T07:39:08.767Z] Copying: 530/1024 [MB] (21 MBps) [2024-11-19T07:39:09.711Z] Copying: 549/1024 [MB] (19 MBps) [2024-11-19T07:39:11.084Z] Copying: 596/1024 [MB] (46 MBps) [2024-11-19T07:39:12.018Z] Copying: 612/1024 [MB] (15 MBps) [2024-11-19T07:39:12.951Z] Copying: 630/1024 [MB] (18 MBps) [2024-11-19T07:39:13.892Z] Copying: 677/1024 [MB] (47 MBps) [2024-11-19T07:39:14.841Z] Copying: 
709/1024 [MB] (31 MBps) [2024-11-19T07:39:15.775Z] Copying: 729/1024 [MB] (19 MBps) [2024-11-19T07:39:16.710Z] Copying: 747/1024 [MB] (18 MBps) [2024-11-19T07:39:18.087Z] Copying: 761/1024 [MB] (13 MBps) [2024-11-19T07:39:19.026Z] Copying: 782/1024 [MB] (21 MBps) [2024-11-19T07:39:19.966Z] Copying: 799/1024 [MB] (17 MBps) [2024-11-19T07:39:20.907Z] Copying: 817/1024 [MB] (18 MBps) [2024-11-19T07:39:21.847Z] Copying: 837/1024 [MB] (19 MBps) [2024-11-19T07:39:22.789Z] Copying: 859/1024 [MB] (22 MBps) [2024-11-19T07:39:23.731Z] Copying: 880/1024 [MB] (20 MBps) [2024-11-19T07:39:25.116Z] Copying: 900/1024 [MB] (19 MBps) [2024-11-19T07:39:25.688Z] Copying: 919/1024 [MB] (19 MBps) [2024-11-19T07:39:27.075Z] Copying: 939/1024 [MB] (20 MBps) [2024-11-19T07:39:28.017Z] Copying: 955/1024 [MB] (15 MBps) [2024-11-19T07:39:28.961Z] Copying: 969/1024 [MB] (13 MBps) [2024-11-19T07:39:29.941Z] Copying: 981/1024 [MB] (12 MBps) [2024-11-19T07:39:30.887Z] Copying: 995/1024 [MB] (13 MBps) [2024-11-19T07:39:31.831Z] Copying: 1005/1024 [MB] (10 MBps) [2024-11-19T07:39:32.775Z] Copying: 1016/1024 [MB] (11 MBps) [2024-11-19T07:39:33.038Z] Copying: 1048352/1048576 [kB] (7044 kBps) [2024-11-19T07:39:33.038Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-19 07:39:32.917335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.788 [2024-11-19 07:39:32.917458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:23.788 [2024-11-19 07:39:32.917495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:23.788 [2024-11-19 07:39:32.917545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.788 [2024-11-19 07:39:32.921248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:23.788 [2024-11-19 07:39:32.926193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.788 [2024-11-19 07:39:32.926365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:23.788 [2024-11-19 07:39:32.926450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.448 ms 00:24:23.788 [2024-11-19 07:39:32.926474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.788 [2024-11-19 07:39:32.940647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.788 [2024-11-19 07:39:32.940830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:23.788 [2024-11-19 07:39:32.941021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.534 ms 00:24:23.788 [2024-11-19 07:39:32.941065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.788 [2024-11-19 07:39:32.967606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.788 [2024-11-19 07:39:32.967782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:23.788 [2024-11-19 07:39:32.967957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.498 ms 00:24:23.788 [2024-11-19 07:39:32.967999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.788 [2024-11-19 07:39:32.974240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.788 [2024-11-19 07:39:32.974390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:23.788 [2024-11-19 07:39:32.974447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:24:23.788 [2024-11-19 
07:39:32.974469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.788 [2024-11-19 07:39:33.001784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.788 [2024-11-19 07:39:33.001954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:23.788 [2024-11-19 07:39:33.002011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.244 ms 00:24:23.788 [2024-11-19 07:39:33.002034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.788 [2024-11-19 07:39:33.018603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.788 [2024-11-19 07:39:33.018776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:23.788 [2024-11-19 07:39:33.018837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.451 ms 00:24:23.788 [2024-11-19 07:39:33.018859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.049 [2024-11-19 07:39:33.286118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.049 [2024-11-19 07:39:33.286326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:24.049 [2024-11-19 07:39:33.286351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 267.204 ms 00:24:24.049 [2024-11-19 07:39:33.286360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.312 [2024-11-19 07:39:33.312694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.312 [2024-11-19 07:39:33.312872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:24.312 [2024-11-19 07:39:33.312893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.305 ms 00:24:24.312 [2024-11-19 07:39:33.312901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.312 [2024-11-19 07:39:33.338484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.312 [2024-11-19 07:39:33.338533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:24.312 [2024-11-19 07:39:33.338544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.546 ms 00:24:24.312 [2024-11-19 07:39:33.338551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.312 [2024-11-19 07:39:33.363899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.312 [2024-11-19 07:39:33.364082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:24.312 [2024-11-19 07:39:33.364102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.301 ms 00:24:24.312 [2024-11-19 07:39:33.364110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.312 [2024-11-19 07:39:33.389319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.312 [2024-11-19 07:39:33.389366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:24.312 [2024-11-19 07:39:33.389379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.095 ms 00:24:24.312 [2024-11-19 07:39:33.389386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.312 [2024-11-19 07:39:33.389431] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:24.312 [2024-11-19 07:39:33.389447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97536 / 261120 wr_cnt: 1 state: open 00:24:24.312 [2024-11-19 
07:39:33.389458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 
[2024-11-19 07:39:33.389681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:24.312 [2024-11-19 07:39:33.389781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:24:24.313 [2024-11-19 07:39:33.389883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.389992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:24.313 [2024-11-19 07:39:33.390312] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:24:24.313 [2024-11-19 07:39:33.390348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05c4e17a-a781-4bc8-81e7-361c5c102370 00:24:24.313 [2024-11-19 07:39:33.390357] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97536 00:24:24.313 [2024-11-19 07:39:33.390364] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 98496 00:24:24.313 [2024-11-19 07:39:33.390373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97536 00:24:24.313 [2024-11-19 07:39:33.390389] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0098 00:24:24.313 [2024-11-19 07:39:33.390398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:24.313 [2024-11-19 07:39:33.390406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:24.313 [2024-11-19 07:39:33.390414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:24.313 [2024-11-19 07:39:33.390421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:24.313 [2024-11-19 07:39:33.390428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:24.313 [2024-11-19 07:39:33.390435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.313 [2024-11-19 07:39:33.390443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:24.313 [2024-11-19 07:39:33.390453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:24:24.313 [2024-11-19 07:39:33.390461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.313 [2024-11-19 07:39:33.403833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.313 [2024-11-19 07:39:33.403879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:24.313 [2024-11-19 07:39:33.403891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.332 ms 00:24:24.313 [2024-11-19 07:39:33.403899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.313 [2024-11-19 07:39:33.404131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.313 [2024-11-19 07:39:33.404141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:24.313 [2024-11-19 07:39:33.404157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:24:24.313 [2024-11-19 07:39:33.404164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.313 [2024-11-19 07:39:33.443416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.313 [2024-11-19 07:39:33.443466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.313 [2024-11-19 07:39:33.443478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.313 [2024-11-19 07:39:33.443487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.313 [2024-11-19 07:39:33.443552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.313 [2024-11-19 07:39:33.443561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.313 [2024-11-19 07:39:33.443575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.313 [2024-11-19 07:39:33.443584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.313 [2024-11-19 07:39:33.443664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
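The ftl_dev_dump_stats block above prints WAF: 1.0098 next to the raw counters it was computed from: write amplification factor is simply total media writes divided by user writes, so the figure can be re-derived from the dump alone. A minimal Python sketch, using only numbers that appear in the log above:

    # Sanity-check the WAF reported by ftl_dev_dump_stats.
    total_writes = 98496   # "total writes" from the dump
    user_writes  = 97536   # "user writes" (equal to "total valid LBAs" here)
    print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0098, matching the log

The 960 extra block writes (98496 - 97536) are the FTL's own bookkeeping traffic on top of the user data written in this pass.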
00:24:24.314 [2024-11-19 07:39:33.443674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.314 [2024-11-19 07:39:33.443683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.443691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.443707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.443715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.314 [2024-11-19 07:39:33.443723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.443735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.524891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.524947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.314 [2024-11-19 07:39:33.524960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.524969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.557327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.557377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.314 [2024-11-19 07:39:33.557396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.557404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.557471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.557481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.314 [2024-11-19 07:39:33.557489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.557497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.557552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.557562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.314 [2024-11-19 07:39:33.557569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.557578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.557687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.557698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.314 [2024-11-19 07:39:33.557706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.557714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.557743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.557751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:24.314 [2024-11-19 07:39:33.557759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.557768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 
07:39:33.557813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.557822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.314 [2024-11-19 07:39:33.557830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.557838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.557888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.314 [2024-11-19 07:39:33.557898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.314 [2024-11-19 07:39:33.557906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.314 [2024-11-19 07:39:33.557915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.314 [2024-11-19 07:39:33.558048] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 641.437 ms, result 0 00:24:25.703 00:24:25.703 00:24:25.703 07:39:34 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:28.249 07:39:36 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.249 [2024-11-19 07:39:36.998140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:28.249 [2024-11-19 07:39:36.998421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77308 ] 00:24:28.249 [2024-11-19 07:39:37.150237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.249 [2024-11-19 07:39:37.372120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.510 [2024-11-19 07:39:37.664995] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.510 [2024-11-19 07:39:37.665309] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.772 [2024-11-19 07:39:37.820382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.820598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:28.772 [2024-11-19 07:39:37.820623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:28.772 [2024-11-19 07:39:37.820638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.820709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.820720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.772 [2024-11-19 07:39:37.820729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:28.772 [2024-11-19 07:39:37.820737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.820759] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:28.772 [2024-11-19 07:39:37.821562] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:28.772 [2024-11-19 07:39:37.821584] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.821594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.772 [2024-11-19 07:39:37.821604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:24:28.772 [2024-11-19 07:39:37.821612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.823259] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:28.772 [2024-11-19 07:39:37.837570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.837622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:28.772 [2024-11-19 07:39:37.837637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.313 ms 00:24:28.772 [2024-11-19 07:39:37.837645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.837722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.837732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:28.772 [2024-11-19 07:39:37.837740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:28.772 [2024-11-19 07:39:37.837748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.846073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.846119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.772 [2024-11-19 07:39:37.846130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.245 ms 00:24:28.772 [2024-11-19 07:39:37.846139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.846260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.846271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.772 [2024-11-19 07:39:37.846280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:24:28.772 [2024-11-19 07:39:37.846290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.846335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.846362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:28.772 [2024-11-19 07:39:37.846371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:28.772 [2024-11-19 07:39:37.846378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.846409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:28.772 [2024-11-19 07:39:37.850636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.850676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.772 [2024-11-19 07:39:37.850686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.239 ms 00:24:28.772 [2024-11-19 07:39:37.850694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.850731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.850741] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:28.772 [2024-11-19 07:39:37.850753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:28.772 [2024-11-19 07:39:37.850760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.850810] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:28.772 [2024-11-19 07:39:37.850834] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:28.772 [2024-11-19 07:39:37.850869] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:28.772 [2024-11-19 07:39:37.850886] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:28.772 [2024-11-19 07:39:37.850962] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:28.772 [2024-11-19 07:39:37.850975] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:28.772 [2024-11-19 07:39:37.850986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:28.772 [2024-11-19 07:39:37.850996] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:28.772 [2024-11-19 07:39:37.851005] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:28.772 [2024-11-19 07:39:37.851013] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:28.772 [2024-11-19 07:39:37.851021] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:28.772 [2024-11-19 07:39:37.851029] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:28.772 [2024-11-19 07:39:37.851036] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:28.772 [2024-11-19 07:39:37.851044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.772 [2024-11-19 07:39:37.851052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:28.772 [2024-11-19 07:39:37.851060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:24:28.772 [2024-11-19 07:39:37.851070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.772 [2024-11-19 07:39:37.851133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.773 [2024-11-19 07:39:37.851142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:28.773 [2024-11-19 07:39:37.851150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:28.773 [2024-11-19 07:39:37.851158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.773 [2024-11-19 07:39:37.851245] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:28.773 [2024-11-19 07:39:37.851255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:28.773 [2024-11-19 07:39:37.851265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:28.773 
[2024-11-19 07:39:37.851292] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851307] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:28.773 [2024-11-19 07:39:37.851316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.773 [2024-11-19 07:39:37.851330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:28.773 [2024-11-19 07:39:37.851337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:28.773 [2024-11-19 07:39:37.851346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.773 [2024-11-19 07:39:37.851353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:28.773 [2024-11-19 07:39:37.851360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:28.773 [2024-11-19 07:39:37.851367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:28.773 [2024-11-19 07:39:37.851389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:28.773 [2024-11-19 07:39:37.851395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:28.773 [2024-11-19 07:39:37.851408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:28.773 [2024-11-19 07:39:37.851415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:28.773 [2024-11-19 07:39:37.851429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:28.773 [2024-11-19 07:39:37.851449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:28.773 [2024-11-19 07:39:37.851468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851481] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:28.773 [2024-11-19 07:39:37.851488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:28.773 [2024-11-19 07:39:37.851507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.25 MiB 00:24:28.773 [2024-11-19 07:39:37.851520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:28.773 [2024-11-19 07:39:37.851527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:28.773 [2024-11-19 07:39:37.851533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.773 [2024-11-19 07:39:37.851538] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:28.773 [2024-11-19 07:39:37.851546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:28.773 [2024-11-19 07:39:37.851554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.773 [2024-11-19 07:39:37.851570] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:28.773 [2024-11-19 07:39:37.851578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:28.773 [2024-11-19 07:39:37.851584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:28.773 [2024-11-19 07:39:37.851591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:28.773 [2024-11-19 07:39:37.851598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:28.773 [2024-11-19 07:39:37.851604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:28.773 [2024-11-19 07:39:37.851612] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:28.773 [2024-11-19 07:39:37.851622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.773 [2024-11-19 07:39:37.851631] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:28.773 [2024-11-19 07:39:37.851638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:28.773 [2024-11-19 07:39:37.851646] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:28.773 [2024-11-19 07:39:37.851653] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:28.773 [2024-11-19 07:39:37.851660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:28.773 [2024-11-19 07:39:37.851667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:28.773 [2024-11-19 07:39:37.851673] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:28.773 [2024-11-19 07:39:37.851680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:28.773 [2024-11-19 07:39:37.851688] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:28.773 [2024-11-19 07:39:37.851694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 
00:24:28.773 [2024-11-19 07:39:37.851701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:28.773 [2024-11-19 07:39:37.851708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:28.773 [2024-11-19 07:39:37.851716] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:28.773 [2024-11-19 07:39:37.851723] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:28.773 [2024-11-19 07:39:37.851732] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.773 [2024-11-19 07:39:37.851740] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:28.773 [2024-11-19 07:39:37.851747] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:28.774 [2024-11-19 07:39:37.851755] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:28.774 [2024-11-19 07:39:37.851762] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:28.774 [2024-11-19 07:39:37.851770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.851778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:28.774 [2024-11-19 07:39:37.851786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:28.774 [2024-11-19 07:39:37.851796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.869964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.870160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.774 [2024-11-19 07:39:37.870201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.128 ms 00:24:28.774 [2024-11-19 07:39:37.870217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.870312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.870321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:28.774 [2024-11-19 07:39:37.870329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:28.774 [2024-11-19 07:39:37.870338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.919892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.919950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.774 [2024-11-19 07:39:37.919964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.498 ms 00:24:28.774 [2024-11-19 07:39:37.919973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.920024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.920034] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.774 [2024-11-19 07:39:37.920044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:28.774 [2024-11-19 07:39:37.920052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.920663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.920696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.774 [2024-11-19 07:39:37.920713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:24:28.774 [2024-11-19 07:39:37.920722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.920849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.920860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.774 [2024-11-19 07:39:37.920869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:28.774 [2024-11-19 07:39:37.920877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.937603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.937790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.774 [2024-11-19 07:39:37.937811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.700 ms 00:24:28.774 [2024-11-19 07:39:37.937820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.952528] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:28.774 [2024-11-19 07:39:37.952711] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:28.774 [2024-11-19 07:39:37.952729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.952738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:28.774 [2024-11-19 07:39:37.952747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.789 ms 00:24:28.774 [2024-11-19 07:39:37.952754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.978929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.978978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:28.774 [2024-11-19 07:39:37.978990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.131 ms 00:24:28.774 [2024-11-19 07:39:37.978998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:37.992018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:37.992067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:28.774 [2024-11-19 07:39:37.992079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.958 ms 00:24:28.774 [2024-11-19 07:39:37.992086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:38.004845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:38.004892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:24:28.774 [2024-11-19 07:39:38.004914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.711 ms 00:24:28.774 [2024-11-19 07:39:38.004922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.774 [2024-11-19 07:39:38.005340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.774 [2024-11-19 07:39:38.005356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:28.774 [2024-11-19 07:39:38.005367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:24:28.774 [2024-11-19 07:39:38.005376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.072438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.072500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:29.035 [2024-11-19 07:39:38.072515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.043 ms 00:24:29.035 [2024-11-19 07:39:38.072524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.084306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:29.035 [2024-11-19 07:39:38.087452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.087499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:29.035 [2024-11-19 07:39:38.087517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.865 ms 00:24:29.035 [2024-11-19 07:39:38.087525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.087599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.087610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:29.035 [2024-11-19 07:39:38.087619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:29.035 [2024-11-19 07:39:38.087627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.089057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.089107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:29.035 [2024-11-19 07:39:38.089117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:24:29.035 [2024-11-19 07:39:38.089132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.090579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.090765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:29.035 [2024-11-19 07:39:38.090787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:24:29.035 [2024-11-19 07:39:38.090795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.090837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.090854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:29.035 [2024-11-19 07:39:38.090863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:29.035 [2024-11-19 07:39:38.090871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 
07:39:38.090909] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:29.035 [2024-11-19 07:39:38.090923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.090930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:29.035 [2024-11-19 07:39:38.090939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:29.035 [2024-11-19 07:39:38.090946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.117375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.117567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:29.035 [2024-11-19 07:39:38.117589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.409 ms 00:24:29.035 [2024-11-19 07:39:38.117606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.035 [2024-11-19 07:39:38.117782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.035 [2024-11-19 07:39:38.117812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:29.035 [2024-11-19 07:39:38.117823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:29.036 [2024-11-19 07:39:38.117831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.036 [2024-11-19 07:39:38.124529] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.320 ms, result 0 00:24:30.421  [2024-11-19T07:39:40.615Z] Copying: 1132/1048576 [kB] (1132 kBps) [2024-11-19T07:39:41.559Z] Copying: 4436/1048576 [kB] (3304 kBps) [2024-11-19T07:39:42.503Z] Copying: 16/1024 [MB] (11 MBps) [2024-11-19T07:39:43.447Z] Copying: 39/1024 [MB] (23 MBps) [2024-11-19T07:39:44.449Z] Copying: 61/1024 [MB] (21 MBps) [2024-11-19T07:39:45.403Z] Copying: 81/1024 [MB] (20 MBps) [2024-11-19T07:39:46.348Z] Copying: 109/1024 [MB] (27 MBps) [2024-11-19T07:39:47.735Z] Copying: 137/1024 [MB] (27 MBps) [2024-11-19T07:39:48.303Z] Copying: 163/1024 [MB] (25 MBps) [2024-11-19T07:39:49.679Z] Copying: 191/1024 [MB] (28 MBps) [2024-11-19T07:39:50.613Z] Copying: 219/1024 [MB] (27 MBps) [2024-11-19T07:39:51.547Z] Copying: 238/1024 [MB] (18 MBps) [2024-11-19T07:39:52.488Z] Copying: 256/1024 [MB] (18 MBps) [2024-11-19T07:39:53.428Z] Copying: 275/1024 [MB] (18 MBps) [2024-11-19T07:39:54.373Z] Copying: 293/1024 [MB] (17 MBps) [2024-11-19T07:39:55.308Z] Copying: 320/1024 [MB] (27 MBps) [2024-11-19T07:39:56.681Z] Copying: 343/1024 [MB] (23 MBps) [2024-11-19T07:39:57.624Z] Copying: 368/1024 [MB] (25 MBps) [2024-11-19T07:39:58.566Z] Copying: 383/1024 [MB] (14 MBps) [2024-11-19T07:39:59.508Z] Copying: 398/1024 [MB] (15 MBps) [2024-11-19T07:40:00.504Z] Copying: 413/1024 [MB] (14 MBps) [2024-11-19T07:40:01.449Z] Copying: 427/1024 [MB] (14 MBps) [2024-11-19T07:40:02.394Z] Copying: 441/1024 [MB] (14 MBps) [2024-11-19T07:40:03.338Z] Copying: 456/1024 [MB] (15 MBps) [2024-11-19T07:40:04.722Z] Copying: 472/1024 [MB] (15 MBps) [2024-11-19T07:40:05.656Z] Copying: 487/1024 [MB] (15 MBps) [2024-11-19T07:40:06.591Z] Copying: 504/1024 [MB] (17 MBps) [2024-11-19T07:40:07.526Z] Copying: 522/1024 [MB] (17 MBps) [2024-11-19T07:40:08.461Z] Copying: 540/1024 [MB] (17 MBps) [2024-11-19T07:40:09.394Z] Copying: 557/1024 [MB] (17 MBps) [2024-11-19T07:40:10.327Z] Copying: 575/1024 [MB] (17 MBps) [2024-11-19T07:40:11.701Z] 
Copying: 593/1024 [MB] (17 MBps) [2024-11-19T07:40:12.635Z] Copying: 611/1024 [MB] (17 MBps) [2024-11-19T07:40:13.569Z] Copying: 629/1024 [MB] (17 MBps) [2024-11-19T07:40:14.504Z] Copying: 647/1024 [MB] (17 MBps) [2024-11-19T07:40:15.447Z] Copying: 665/1024 [MB] (18 MBps) [2024-11-19T07:40:16.388Z] Copying: 682/1024 [MB] (16 MBps) [2024-11-19T07:40:17.322Z] Copying: 699/1024 [MB] (17 MBps) [2024-11-19T07:40:18.695Z] Copying: 718/1024 [MB] (19 MBps) [2024-11-19T07:40:19.628Z] Copying: 736/1024 [MB] (17 MBps) [2024-11-19T07:40:20.564Z] Copying: 755/1024 [MB] (18 MBps) [2024-11-19T07:40:21.503Z] Copying: 773/1024 [MB] (18 MBps) [2024-11-19T07:40:22.444Z] Copying: 791/1024 [MB] (17 MBps) [2024-11-19T07:40:23.402Z] Copying: 809/1024 [MB] (18 MBps) [2024-11-19T07:40:24.344Z] Copying: 830/1024 [MB] (20 MBps) [2024-11-19T07:40:25.729Z] Copying: 848/1024 [MB] (18 MBps) [2024-11-19T07:40:26.303Z] Copying: 864/1024 [MB] (15 MBps) [2024-11-19T07:40:27.690Z] Copying: 879/1024 [MB] (15 MBps) [2024-11-19T07:40:28.635Z] Copying: 895/1024 [MB] (15 MBps) [2024-11-19T07:40:29.576Z] Copying: 910/1024 [MB] (15 MBps) [2024-11-19T07:40:30.509Z] Copying: 926/1024 [MB] (15 MBps) [2024-11-19T07:40:31.445Z] Copying: 944/1024 [MB] (18 MBps) [2024-11-19T07:40:32.380Z] Copying: 962/1024 [MB] (18 MBps) [2024-11-19T07:40:33.316Z] Copying: 981/1024 [MB] (18 MBps) [2024-11-19T07:40:34.694Z] Copying: 999/1024 [MB] (18 MBps) [2024-11-19T07:40:34.694Z] Copying: 1017/1024 [MB] (18 MBps) [2024-11-19T07:40:34.694Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-19 07:40:34.691812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.444 [2024-11-19 07:40:34.691948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:25.444 [2024-11-19 07:40:34.692009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:25.444 [2024-11-19 07:40:34.692032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.444 [2024-11-19 07:40:34.692068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:25.444 [2024-11-19 07:40:34.694813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.444 [2024-11-19 07:40:34.694955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:25.444 [2024-11-19 07:40:34.695007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.708 ms 00:25:25.444 [2024-11-19 07:40:34.695035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.444 [2024-11-19 07:40:34.695268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.444 [2024-11-19 07:40:34.695294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:25.444 [2024-11-19 07:40:34.695357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:25:25.444 [2024-11-19 07:40:34.695379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.703 [2024-11-19 07:40:34.708192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.703 [2024-11-19 07:40:34.708301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:25.704 [2024-11-19 07:40:34.708354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.786 ms 00:25:25.704 [2024-11-19 07:40:34.708375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.714520] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.714621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:25.704 [2024-11-19 07:40:34.714671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:25:25.704 [2024-11-19 07:40:34.714692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.739294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.739408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:25.704 [2024-11-19 07:40:34.739457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.547 ms 00:25:25.704 [2024-11-19 07:40:34.739478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.753671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.753773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:25.704 [2024-11-19 07:40:34.753820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.155 ms 00:25:25.704 [2024-11-19 07:40:34.753841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.762518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.762636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:25.704 [2024-11-19 07:40:34.762697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.417 ms 00:25:25.704 [2024-11-19 07:40:34.762720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.786380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.786484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:25.704 [2024-11-19 07:40:34.786531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.628 ms 00:25:25.704 [2024-11-19 07:40:34.786552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.810113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.810227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:25.704 [2024-11-19 07:40:34.810432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.524 ms 00:25:25.704 [2024-11-19 07:40:34.810462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.833208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.833306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:25.704 [2024-11-19 07:40:34.833351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.709 ms 00:25:25.704 [2024-11-19 07:40:34.833371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.704 [2024-11-19 07:40:34.856325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.704 [2024-11-19 07:40:34.856424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:25.704 [2024-11-19 07:40:34.856469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.888 ms 00:25:25.704 [2024-11-19 07:40:34.856489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
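The sizes in this pass are internally consistent. spdk_dd was invoked with --count=262144; the superblock dump above maps region type:0x9 (0x1900000 blocks) to 102400.00 MiB, which pins the FTL block size at 4096 bytes, so the copy works out to exactly the 1024 MiB the progress counter reports, and the 20971520 L2P entries at 4 bytes each ("L2P address size: 4") account for the 80.00 MiB l2p region. A short Python sketch re-deriving those figures, with the block size inferred from the dump as noted:

    # Re-derive the sizes printed in the FTL layout dump and the dd progress.
    MiB = 1024 * 1024
    ftl_block = 102400 * MiB // 0x1900000      # = 4096 B, from region type:0x9 in the dump
    assert ftl_block == 4096

    dd_count = 262144                          # --count passed to spdk_dd
    print(dd_count * ftl_block // MiB, "MiB")  # 1024 -> matches "Copying: 1024/1024 [MB]"

    l2p_entries = 20971520                     # "L2P entries" from ftl_layout_setup
    print(l2p_entries * 4 / MiB, "MiB")        # 80.0 -> matches the 80.00 MiB l2p region

    band_blocks = 261120                       # per-band capacity from the Bands validity dumps
    print(band_blocks * ftl_block // MiB, "MiB per band")  # 1020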
00:25:25.704 [2024-11-19 07:40:34.856524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:25.704 [2024-11-19 07:40:34.856550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:25:25.704 [2024-11-19 07:40:34.856581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open
00:25:25.704 [2024-11-19 07:40:34.856609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free
00:25:25.705 [2024-11-19 07:40:34.858899] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05c4e17a-a781-4bc8-81e7-361c5c102370
00:25:25.705 [2024-11-19 07:40:34.858910] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448
00:25:25.705 [2024-11-19 07:40:34.858917] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 168896
00:25:25.705 [2024-11-19 07:40:34.858924] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 166912
00:25:25.705 [2024-11-19 07:40:34.858933] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119
00:25:25.705 [2024-11-19 07:40:34.858939] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:25:25.705 [2024-11-19 07:40:34.858980] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 2.457 ms, status: 0)
00:25:25.705 [2024-11-19 07:40:34.871410] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 12.366 ms, status: 0)
00:25:25.705 [2024-11-19 07:40:34.871651] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.164 ms, status: 0)
00:25:25.705 [2024-11-19 07:40:34.906717] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:25:25.705 [2024-11-19 07:40:34.906810] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
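For reference, the WAF figure in the statistics block above is the ratio of media writes to user writes; recomputing it from the two logged counters reproduces the printed value. A sketch, using the counters verbatim:

  # Reproduce the WAF (write amplification factor) from ftl_dev_dump_stats.
  total_writes = 168896   # all writes the FTL issued to the media
  user_writes  = 166912   # writes requested by the host/test
  waf = total_writes / user_writes
  print(f"WAF = {waf:.4f}")                                 # -> WAF = 1.0119
  print(f"overhead = {total_writes - user_writes} blocks")  # metadata/GC writes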
00:25:25.705 [2024-11-19 07:40:34.906890] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:25:25.705 [2024-11-19 07:40:34.906929] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:25:25.964 [2024-11-19 07:40:34.980744] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:25:25.964 [2024-11-19 07:40:35.009097] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:25:25.964 [2024-11-19 07:40:35.009219] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:25:25.964 [2024-11-19 07:40:35.009282] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:25:25.964 [2024-11-19 07:40:35.009388] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:25:25.964 [2024-11-19 07:40:35.009441] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:25:25.965 [2024-11-19 07:40:35.009500] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:25:25.965 [2024-11-19 07:40:35.009570] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:25:25.965 [2024-11-19 07:40:35.009699] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 317.860 ms, result 0
00:25:26.900 07:40:35 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:28.814 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:25:28.814 07:40:37 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:28.814 [2024-11-19 07:40:38.053224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
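The md5sum -c step is the heart of the dirty-shutdown check: the data read back through ftl0 must hash to the digest recorded before the shutdown. A rough Python equivalent of that verification; it assumes the standard "digest  path" md5sum file format, with the paths taken from the commands above:

  import hashlib

  def md5_of(path: str, chunk: int = 1 << 20) -> str:
      # Stream the file so large test files do not have to fit in memory.
      h = hashlib.md5()
      with open(path, "rb") as f:
          while block := f.read(chunk):
              h.update(block)
      return h.hexdigest()

  # testfile.md5 holds "digest  path", as written by `md5sum testfile`.
  md5_file = "/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5"
  digest, path = open(md5_file).read().split()
  print(f"{path}: " + ("OK" if md5_of(path) == digest else "FAILED"))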
00:25:28.814 [2024-11-19 07:40:38.053598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77939 ]
00:25:29.075 [2024-11-19 07:40:38.209673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:29.333 [2024-11-19 07:40:38.432554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:29.590 [2024-11-19 07:40:38.711953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:29.590 [2024-11-19 07:40:38.712011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:29.848 [2024-11-19 07:40:38.862579] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.005 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.862689] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.027 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.862730] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:29.848 [2024-11-19 07:40:38.863463] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:29.848 [2024-11-19 07:40:38.863478] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.752 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.864537] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:29.848 [2024-11-19 07:40:38.877267] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 12.732 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.877466] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.018 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.882299] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 4.744 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.882421] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.059 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.882484] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.006 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.882533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:29.848 [2024-11-19 07:40:38.885980] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 3.457 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.886051] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.010 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.886093] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:29.848 [2024-11-19 07:40:38.886109] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:25:29.848 [2024-11-19 07:40:38.886140] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:29.848 [2024-11-19 07:40:38.886155] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:25:29.848 [2024-11-19 07:40:38.886237] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:25:29.848 [2024-11-19 07:40:38.886248] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:29.848 [2024-11-19 07:40:38.886260] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:25:29.848 [2024-11-19 07:40:38.886269] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886278] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886286] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:25:29.848 [2024-11-19 07:40:38.886293] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:25:29.848 [2024-11-19 07:40:38.886300] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:25:29.848 [2024-11-19 07:40:38.886307] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:25:29.848 [2024-11-19 07:40:38.886314] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.223 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.886396] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.045 ms, status: 0)
00:25:29.848 [2024-11-19 07:40:38.886494] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:25:29.848 [2024-11-19 07:40:38.886504] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:25:29.848 [2024-11-19 07:40:38.886526] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886545] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:25:29.848 [2024-11-19 07:40:38.886566] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:25:29.848 [2024-11-19 07:40:38.886585] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 97.62 MiB, blocks 0.12 MiB
00:25:29.848 [2024-11-19 07:40:38.886610] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 97.75 MiB, blocks 0.12 MiB
00:25:29.848 [2024-11-19 07:40:38.886629] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc: offset 97.88 MiB, blocks 4096.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886648] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 4.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886666] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 85.12 MiB, blocks 4.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886684] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 89.12 MiB, blocks 4.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886703] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 93.12 MiB, blocks 4.00 MiB
00:25:29.848 [2024-11-19 07:40:38.886721] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 97.12 MiB, blocks 0.25 MiB
00:25:29.848 [2024-11-19 07:40:38.886740] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 97.38 MiB, blocks 0.25 MiB
00:25:29.848 [2024-11-19 07:40:38.886758] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:25:29.848 [2024-11-19 07:40:38.886767] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:25:29.848 [2024-11-19 07:40:38.886789] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:25:29.848 [2024-11-19 07:40:38.886808] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:25:29.849 [2024-11-19 07:40:38.886828] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:25:29.849 [2024-11-19 07:40:38.886837] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:29.849 [2024-11-19 07:40:38.886845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:25:29.849 [2024-11-19 07:40:38.886852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:25:29.849 [2024-11-19 07:40:38.886859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:25:29.849 [2024-11-19 07:40:38.886866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:25:29.849 [2024-11-19 07:40:38.886873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:25:29.849 [2024-11-19 07:40:38.886879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:25:29.849 [2024-11-19 07:40:38.886886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:25:29.849 [2024-11-19 07:40:38.886893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:25:29.849 [2024-11-19 07:40:38.886899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:25:29.849 [2024-11-19 07:40:38.886906] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:25:29.849 [2024-11-19 07:40:38.886914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:25:29.849 [2024-11-19 07:40:38.886920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:25:29.849 [2024-11-19 07:40:38.886928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:25:29.849 [2024-11-19 07:40:38.886934] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:25:29.849 [2024-11-19 07:40:38.886941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:29.849 [2024-11-19 07:40:38.886949] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:25:29.849 [2024-11-19 07:40:38.886956] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:25:29.849 [2024-11-19 07:40:38.886964] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:25:29.849 [2024-11-19 07:40:38.886970] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
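The two dumps above are consistent with a 4 KiB FTL block: converting the hex blk_sz fields to MiB reproduces the region sizes printed earlier. A quick sketch; the 4096-byte block size matches SPDK's FTL_BLOCK_SIZE, and the type-to-region pairing below is inferred only from matching sizes, so treat both as assumptions:

  # Convert SB metadata blk_sz values (counted in FTL blocks) to MiB.
  BLOCK = 4096  # bytes per FTL block (assumption; see note above)

  def mib(blk_sz_hex: str) -> float:
      return int(blk_sz_hex, 16) * BLOCK / (1 << 20)

  print(mib("0x5000"))     # 80.0     -> matches the l2p region (80.00 MiB)
  print(mib("0x100000"))   # 4096.0   -> matches data_nvc (4096.00 MiB)
  print(mib("0x1900000"))  # 102400.0 -> matches data_btm (102400.00 MiB)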
00:25:29.849 [2024-11-19 07:40:38.886977] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.525 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.901768] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 14.730 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.901902] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.064 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.949725] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 47.761 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.949819] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.002 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.950177] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.291 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.950337] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.091 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.963902] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 13.520 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:38.976604] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:25:29.849 [2024-11-19 07:40:38.976636] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:29.849 [2024-11-19 07:40:38.976646] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 12.614 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.000944] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 24.239 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.012855] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 11.813 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.024452] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 11.522 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.024844] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.267 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.082337] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 57.452 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.093021] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:25:29.849 [2024-11-19 07:40:39.095137] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 12.700 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.095257] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.004 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.095825] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.507 ms, status: 0)
00:25:29.849 [2024-11-19 07:40:39.097041] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Free P2L region bufs (duration: 1.164 ms, status: 0)
00:25:29.850 [2024-11-19 07:40:39.097111] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.004 ms, status: 0)
00:25:29.850 [2024-11-19 07:40:39.097166] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:29.850 [2024-11-19 07:40:39.097175] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.010 ms, status: 0)
00:25:30.111 [2024-11-19 07:40:39.120684] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 23.455 ms, status: 0)
00:25:30.111 [2024-11-19 07:40:39.120798] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.032 ms, status: 0)
00:25:30.112 [2024-11-19 07:40:39.121680] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.699 ms, result 0
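The per-step durations in the startup trace nearly account for the total reported by finish_msg; the small remainder is time spent between traced steps. Summing the values transcribed from the records above:

  # Step durations (ms) transcribed from the 'FTL startup' trace, in order.
  steps_ms = [
      0.005, 0.027, 0.752, 12.732, 0.018, 4.744, 0.059, 0.006, 3.457, 0.010,
      0.223, 0.045, 0.525, 14.730, 0.064, 47.761, 0.002, 0.291, 0.091, 13.520,
      12.614, 24.239, 11.813, 11.522, 0.267, 57.452, 12.700, 0.004, 0.507,
      1.164, 0.004, 0.010, 23.455, 0.032,
  ]
  print(f"traced: {sum(steps_ms):.3f} ms of 258.699 ms total")  # ~254.845 ms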
[2024-11-19T07:41:58.958Z] Copying: 1024/1024 [MB] (average 12 MBps)
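The copy geometry follows from the spdk_dd flags used above: assuming --count and --skip are counted in the 4 KiB device blocks seen in the layout dump, --count=262144 --skip=262144 selects a 1024 MiB read starting 1024 MiB into the device, which is why the progress ticks count up to 1024 [MB]. A quick check:

  BLOCK = 4096           # bytes per block (assumption carried over from the layout check)
  count = skip = 262144  # flags from the spdk_dd invocation above
  to_mib = lambda blocks: blocks * BLOCK // (1 << 20)
  print(f"read {to_mib(count)} MiB at offset {to_mib(skip)} MiB")
  # -> read 1024 MiB at offset 1024 MiB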
00:26:49.708 [2024-11-19 07:41:58.856954] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.007 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.857266] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:49.708 [2024-11-19 07:41:58.866257] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 8.950 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.866624] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.256 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.870151] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 3.483 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.876414] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P unmaps (duration: 6.169 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.902741] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 26.196 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.918206] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 15.186 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.922486] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 4.172 ms, status: 0)
00:26:49.708 [2024-11-19 07:41:58.946107] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata (duration: 23.550 ms, status: 0)
00:26:49.970 [2024-11-19 07:41:58.969701] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata (duration: 23.364 ms, status: 0)
00:26:49.970 [2024-11-19 07:41:58.992943] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 23.048 ms, status: 0)
00:26:49.970 [2024-11-19 07:41:59.016410] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 23.266 ms, status: 0)
00:26:49.970 [2024-11-19 07:41:59.016500] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:49.970 [2024-11-19 07:41:59.016519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:26:49.970 [2024-11-19 07:41:59.016529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open
00:26:49.970 [2024-11-19 07:41:59.016537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-25: 0 / 261120 wr_cnt: 0 state: free
00:26:49.970 [2024-11-19 07:41:59.016712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.016998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.017006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.017014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:49.970 [2024-11-19 07:41:59.017021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017297] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:49.971 [2024-11-19 07:41:59.017313] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:49.971 [2024-11-19 07:41:59.017320] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05c4e17a-a781-4bc8-81e7-361c5c102370 00:26:49.971 [2024-11-19 07:41:59.017328] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:26:49.971 [2024-11-19 07:41:59.017335] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:49.971 [2024-11-19 07:41:59.017342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:49.971 [2024-11-19 07:41:59.017349] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:49.971 [2024-11-19 07:41:59.017356] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:49.971 [2024-11-19 07:41:59.017364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:49.971 [2024-11-19 07:41:59.017372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:49.971 [2024-11-19 07:41:59.017384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:49.971 [2024-11-19 07:41:59.017391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:49.971 [2024-11-19 07:41:59.017398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.971 [2024-11-19 07:41:59.017405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:49.971 [2024-11-19 07:41:59.017416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:26:49.971 [2024-11-19 07:41:59.017423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.032153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.971 [2024-11-19 07:41:59.032328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:49.971 [2024-11-19 07:41:59.032348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.700 ms 00:26:49.971 [2024-11-19 07:41:59.032358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.032579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.971 [2024-11-19 07:41:59.032589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:49.971 [2024-11-19 07:41:59.032598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:26:49.971 [2024-11-19 07:41:59.032605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.070436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.070481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:49.971 [2024-11-19 07:41:59.070493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.070502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.070569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.070577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:49.971 [2024-11-19 07:41:59.070586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.070593] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.070664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.070675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:49.971 [2024-11-19 07:41:59.070683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.070691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.070708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.070720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:49.971 [2024-11-19 07:41:59.070728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.070735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.151849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.151905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:49.971 [2024-11-19 07:41:59.151918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.151926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.184121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.184176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:49.971 [2024-11-19 07:41:59.184221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.184233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.184324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.184335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:49.971 [2024-11-19 07:41:59.184344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.184353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.184399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.184410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:49.971 [2024-11-19 07:41:59.184424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.184433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.184536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.184546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:49.971 [2024-11-19 07:41:59.184555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.184563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.184594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.184605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:49.971 [2024-11-19 07:41:59.184613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.184624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.184667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.184677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:49.971 [2024-11-19 07:41:59.184685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.971 [2024-11-19 07:41:59.184693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.971 [2024-11-19 07:41:59.184740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.971 [2024-11-19 07:41:59.184750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:49.972 [2024-11-19 07:41:59.184761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.972 [2024-11-19 07:41:59.184769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.972 [2024-11-19 07:41:59.184901] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.945 ms, result 0 00:26:50.912 00:26:50.912 00:26:50.912 07:42:00 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:53.522 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:53.522 Process with pid 76011 is not found 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76011 00:26:53.522 07:42:02 -- common/autotest_common.sh@936 -- # '[' -z 76011 ']' 00:26:53.522 07:42:02 -- common/autotest_common.sh@940 -- # kill -0 76011 00:26:53.522 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76011) - No such process 00:26:53.522 07:42:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76011 is not found' 00:26:53.522 07:42:02 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:53.780 Remove shared memory files 00:26:53.780 07:42:02 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:53.780 07:42:02 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:53.780 07:42:02 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:53.780 07:42:02 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:53.780 07:42:02 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:53.780 07:42:02 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:53.780 07:42:02 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:53.780 ************************************ 00:26:53.780 END TEST ftl_dirty_shutdown 00:26:53.780 ************************************ 00:26:53.780 00:26:53.780 real 4m24.010s 00:26:53.780 user 4m44.702s 00:26:53.780 sys 0m24.461s 00:26:53.780 07:42:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:53.780 07:42:02 -- common/autotest_common.sh@10 -- # set +x 
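The dirty-shutdown pass that ends above hinges on a single check: a checksum recorded before the unclean FTL shutdown must still verify after recovery, which is what the 'md5sum -c .../testfile2.md5' line and its 'testfile2: OK' result show, followed by the restore_kill teardown. A minimal sketch of that record-then-verify pattern, with illustrative paths and sizes (the harness's real files and sizes are set elsewhere in dirty_shutdown.sh):

#!/usr/bin/env bash
# Hypothetical path and size; only the md5 record/verify pattern mirrors the harness.
testfile=/tmp/ftl_testfile
dd if=/dev/urandom of="$testfile" bs=1M count=64 status=none  # write known data through the FTL bdev
md5sum "$testfile" > "$testfile.md5"                          # record the checksum before the dirty shutdown
# ... unclean FTL shutdown and recovery would happen here ...
md5sum -c "$testfile.md5"                                     # prints "<file>: OK" on success, as in the log above
rm -f "$testfile" "$testfile.md5"                             # teardown, mirroring restore_kill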
00:26:53.780 07:42:02 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:53.780 07:42:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:53.780 07:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:53.780 07:42:02 -- common/autotest_common.sh@10 -- # set +x 00:26:53.780 ************************************ 00:26:53.780 START TEST ftl_upgrade_shutdown 00:26:53.780 ************************************ 00:26:53.780 07:42:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:53.780 * Looking for test storage... 00:26:54.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:54.040 07:42:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:54.040 07:42:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:54.040 07:42:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:54.040 07:42:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:54.040 07:42:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:54.040 07:42:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:54.040 07:42:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:54.040 07:42:03 -- scripts/common.sh@335 -- # IFS=.-: 00:26:54.040 07:42:03 -- scripts/common.sh@335 -- # read -ra ver1 00:26:54.040 07:42:03 -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.040 07:42:03 -- scripts/common.sh@336 -- # read -ra ver2 00:26:54.040 07:42:03 -- scripts/common.sh@337 -- # local 'op=<' 00:26:54.040 07:42:03 -- scripts/common.sh@339 -- # ver1_l=2 00:26:54.040 07:42:03 -- scripts/common.sh@340 -- # ver2_l=1 00:26:54.040 07:42:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:54.040 07:42:03 -- scripts/common.sh@343 -- # case "$op" in 00:26:54.040 07:42:03 -- scripts/common.sh@344 -- # : 1 00:26:54.040 07:42:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:54.040 07:42:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.040 07:42:03 -- scripts/common.sh@364 -- # decimal 1 00:26:54.040 07:42:03 -- scripts/common.sh@352 -- # local d=1 00:26:54.040 07:42:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.040 07:42:03 -- scripts/common.sh@354 -- # echo 1 00:26:54.040 07:42:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:54.040 07:42:03 -- scripts/common.sh@365 -- # decimal 2 00:26:54.040 07:42:03 -- scripts/common.sh@352 -- # local d=2 00:26:54.040 07:42:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.040 07:42:03 -- scripts/common.sh@354 -- # echo 2 00:26:54.040 07:42:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:54.040 07:42:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:54.040 07:42:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:54.040 07:42:03 -- scripts/common.sh@367 -- # return 0 00:26:54.040 07:42:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.040 07:42:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:54.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.040 --rc genhtml_branch_coverage=1 00:26:54.040 --rc genhtml_function_coverage=1 00:26:54.040 --rc genhtml_legend=1 00:26:54.040 --rc geninfo_all_blocks=1 00:26:54.040 --rc geninfo_unexecuted_blocks=1 00:26:54.040 00:26:54.040 ' 00:26:54.040 07:42:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:54.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.040 --rc genhtml_branch_coverage=1 00:26:54.040 --rc genhtml_function_coverage=1 00:26:54.040 --rc genhtml_legend=1 00:26:54.040 --rc geninfo_all_blocks=1 00:26:54.040 --rc geninfo_unexecuted_blocks=1 00:26:54.040 00:26:54.040 ' 00:26:54.040 07:42:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:54.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.040 --rc genhtml_branch_coverage=1 00:26:54.040 --rc genhtml_function_coverage=1 00:26:54.040 --rc genhtml_legend=1 00:26:54.040 --rc geninfo_all_blocks=1 00:26:54.040 --rc geninfo_unexecuted_blocks=1 00:26:54.040 00:26:54.040 ' 00:26:54.040 07:42:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:54.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.040 --rc genhtml_branch_coverage=1 00:26:54.040 --rc genhtml_function_coverage=1 00:26:54.040 --rc genhtml_legend=1 00:26:54.040 --rc geninfo_all_blocks=1 00:26:54.040 --rc geninfo_unexecuted_blocks=1 00:26:54.040 00:26:54.040 ' 00:26:54.040 07:42:03 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:54.040 07:42:03 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:54.040 07:42:03 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:54.040 07:42:03 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:54.040 07:42:03 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:54.040 07:42:03 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:54.040 07:42:03 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:54.040 07:42:03 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:54.040 07:42:03 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:54.040 07:42:03 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:54.040 07:42:03 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:54.040 07:42:03 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:54.040 07:42:03 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:54.040 07:42:03 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:54.041 07:42:03 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:54.041 07:42:03 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:54.041 07:42:03 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:54.041 07:42:03 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:54.041 07:42:03 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:54.041 07:42:03 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:54.041 07:42:03 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:54.041 07:42:03 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:54.041 07:42:03 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:54.041 07:42:03 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:54.041 07:42:03 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:54.041 07:42:03 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:54.041 07:42:03 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:54.041 07:42:03 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:54.041 07:42:03 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:54.041 07:42:03 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:54.041 07:42:03 -- ftl/common.sh@81 -- # local base_bdev= 00:26:54.041 07:42:03 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:54.041 07:42:03 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:54.041 07:42:03 -- ftl/common.sh@89 -- # spdk_tgt_pid=78879 00:26:54.041 07:42:03 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:54.041 07:42:03 -- ftl/common.sh@91 -- # waitforlisten 78879 00:26:54.041 07:42:03 -- common/autotest_common.sh@829 -- # '[' -z 78879 ']' 00:26:54.041 07:42:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.041 07:42:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.041 07:42:03 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:54.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.041 07:42:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.041 07:42:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.041 07:42:03 -- common/autotest_common.sh@10 -- # set +x 00:26:54.041 [2024-11-19 07:42:03.201031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:54.041 [2024-11-19 07:42:03.201312] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78879 ] 00:26:54.301 [2024-11-19 07:42:03.348834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.301 [2024-11-19 07:42:03.536781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:54.301 [2024-11-19 07:42:03.537219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.675 07:42:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.675 07:42:04 -- common/autotest_common.sh@862 -- # return 0 00:26:55.675 07:42:04 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:55.676 07:42:04 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:55.676 07:42:04 -- ftl/common.sh@99 -- # local params 00:26:55.676 07:42:04 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:55.676 07:42:04 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:55.676 07:42:04 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:55.676 07:42:04 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:55.676 07:42:04 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:55.676 07:42:04 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:55.676 07:42:04 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:55.676 07:42:04 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:55.676 07:42:04 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:55.676 07:42:04 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:55.676 07:42:04 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:55.676 07:42:04 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:55.676 07:42:04 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:55.676 07:42:04 -- ftl/common.sh@54 -- # local name=base 00:26:55.676 07:42:04 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:55.676 07:42:04 -- ftl/common.sh@56 -- # local size=20480 00:26:55.676 07:42:04 -- ftl/common.sh@59 -- # local base_bdev 00:26:55.676 07:42:04 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
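Before any bdevs can be created for the upgrade-shutdown test, the harness starts spdk_tgt pinned to core 0 and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock, which is the point just reached above. A minimal sketch of that start-and-poll pattern, assuming an illustrative retry count and interval (the harness's own timeout logic differs):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
spdk_tgt_pid=$!
# Poll the RPC socket until the target responds; rpc_get_methods is a cheap query
# that succeeds as soon as the RPC server is listening.
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
# From here, RPCs such as bdev_nvme_attach_controller (next in the log) can proceed.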
PCIe -a 0000:00:07.0 00:26:55.934 07:42:04 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:55.934 07:42:04 -- ftl/common.sh@62 -- # local base_size 00:26:55.934 07:42:04 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:55.934 07:42:04 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:26:55.934 07:42:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:55.934 07:42:04 -- common/autotest_common.sh@1369 -- # local bs 00:26:55.934 07:42:04 -- common/autotest_common.sh@1370 -- # local nb 00:26:55.934 07:42:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:55.934 07:42:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:55.934 { 00:26:55.934 "name": "basen1", 00:26:55.934 "aliases": [ 00:26:55.934 "f53f755b-2b9f-4c06-84c6-f2f97f847a8d" 00:26:55.934 ], 00:26:55.934 "product_name": "NVMe disk", 00:26:55.934 "block_size": 4096, 00:26:55.934 "num_blocks": 1310720, 00:26:55.934 "uuid": "f53f755b-2b9f-4c06-84c6-f2f97f847a8d", 00:26:55.934 "assigned_rate_limits": { 00:26:55.934 "rw_ios_per_sec": 0, 00:26:55.934 "rw_mbytes_per_sec": 0, 00:26:55.934 "r_mbytes_per_sec": 0, 00:26:55.934 "w_mbytes_per_sec": 0 00:26:55.934 }, 00:26:55.934 "claimed": true, 00:26:55.934 "claim_type": "read_many_write_one", 00:26:55.934 "zoned": false, 00:26:55.934 "supported_io_types": { 00:26:55.934 "read": true, 00:26:55.934 "write": true, 00:26:55.934 "unmap": true, 00:26:55.934 "write_zeroes": true, 00:26:55.934 "flush": true, 00:26:55.934 "reset": true, 00:26:55.934 "compare": true, 00:26:55.934 "compare_and_write": false, 00:26:55.934 "abort": true, 00:26:55.934 "nvme_admin": true, 00:26:55.934 "nvme_io": true 00:26:55.934 }, 00:26:55.934 "driver_specific": { 00:26:55.934 "nvme": [ 00:26:55.934 { 00:26:55.934 "pci_address": "0000:00:07.0", 00:26:55.934 "trid": { 00:26:55.934 "trtype": "PCIe", 00:26:55.934 "traddr": "0000:00:07.0" 00:26:55.934 }, 00:26:55.934 "ctrlr_data": { 00:26:55.934 "cntlid": 0, 00:26:55.934 "vendor_id": "0x1b36", 00:26:55.934 "model_number": "QEMU NVMe Ctrl", 00:26:55.934 "serial_number": "12341", 00:26:55.934 "firmware_revision": "8.0.0", 00:26:55.934 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:55.934 "oacs": { 00:26:55.934 "security": 0, 00:26:55.934 "format": 1, 00:26:55.934 "firmware": 0, 00:26:55.934 "ns_manage": 1 00:26:55.934 }, 00:26:55.934 "multi_ctrlr": false, 00:26:55.934 "ana_reporting": false 00:26:55.934 }, 00:26:55.934 "vs": { 00:26:55.934 "nvme_version": "1.4" 00:26:55.934 }, 00:26:55.934 "ns_data": { 00:26:55.934 "id": 1, 00:26:55.934 "can_share": false 00:26:55.934 } 00:26:55.934 } 00:26:55.934 ], 00:26:55.934 "mp_policy": "active_passive" 00:26:55.934 } 00:26:55.934 } 00:26:55.934 ]' 00:26:55.934 07:42:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:55.934 07:42:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:55.934 07:42:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:56.193 07:42:05 -- common/autotest_common.sh@1373 -- # nb=1310720 00:26:56.193 07:42:05 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:26:56.193 07:42:05 -- common/autotest_common.sh@1377 -- # echo 5120 00:26:56.193 07:42:05 -- ftl/common.sh@63 -- # base_size=5120 00:26:56.193 07:42:05 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:56.193 07:42:05 -- ftl/common.sh@67 -- # clear_lvols 00:26:56.193 07:42:05 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:56.193 07:42:05 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:26:56.193 07:42:05 -- ftl/common.sh@28 -- # stores=7c59f9be-b53d-41c9-bda1-595bf6bae969 00:26:56.193 07:42:05 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:56.193 07:42:05 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c59f9be-b53d-41c9-bda1-595bf6bae969 00:26:56.452 07:42:05 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:56.710 07:42:05 -- ftl/common.sh@68 -- # lvs=9233e873-0710-47e3-9d75-18b80f06815c 00:26:56.710 07:42:05 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 9233e873-0710-47e3-9d75-18b80f06815c 00:26:56.968 07:42:05 -- ftl/common.sh@107 -- # base_bdev=24c33f82-510d-4402-8318-c468557b7d48 00:26:56.968 07:42:05 -- ftl/common.sh@108 -- # [[ -z 24c33f82-510d-4402-8318-c468557b7d48 ]] 00:26:56.968 07:42:05 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 24c33f82-510d-4402-8318-c468557b7d48 5120 00:26:56.968 07:42:05 -- ftl/common.sh@35 -- # local name=cache 00:26:56.968 07:42:05 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:56.968 07:42:05 -- ftl/common.sh@37 -- # local base_bdev=24c33f82-510d-4402-8318-c468557b7d48 00:26:56.968 07:42:05 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:56.969 07:42:05 -- ftl/common.sh@41 -- # get_bdev_size 24c33f82-510d-4402-8318-c468557b7d48 00:26:56.969 07:42:05 -- common/autotest_common.sh@1367 -- # local bdev_name=24c33f82-510d-4402-8318-c468557b7d48 00:26:56.969 07:42:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:56.969 07:42:05 -- common/autotest_common.sh@1369 -- # local bs 00:26:56.969 07:42:05 -- common/autotest_common.sh@1370 -- # local nb 00:26:56.969 07:42:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24c33f82-510d-4402-8318-c468557b7d48 00:26:56.969 07:42:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:56.969 { 00:26:56.969 "name": "24c33f82-510d-4402-8318-c468557b7d48", 00:26:56.969 "aliases": [ 00:26:56.969 "lvs/basen1p0" 00:26:56.969 ], 00:26:56.969 "product_name": "Logical Volume", 00:26:56.969 "block_size": 4096, 00:26:56.969 "num_blocks": 5242880, 00:26:56.969 "uuid": "24c33f82-510d-4402-8318-c468557b7d48", 00:26:56.969 "assigned_rate_limits": { 00:26:56.969 "rw_ios_per_sec": 0, 00:26:56.969 "rw_mbytes_per_sec": 0, 00:26:56.969 "r_mbytes_per_sec": 0, 00:26:56.969 "w_mbytes_per_sec": 0 00:26:56.969 }, 00:26:56.969 "claimed": false, 00:26:56.969 "zoned": false, 00:26:56.969 "supported_io_types": { 00:26:56.969 "read": true, 00:26:56.969 "write": true, 00:26:56.969 "unmap": true, 00:26:56.969 "write_zeroes": true, 00:26:56.969 "flush": false, 00:26:56.969 "reset": true, 00:26:56.969 "compare": false, 00:26:56.969 "compare_and_write": false, 00:26:56.969 "abort": false, 00:26:56.969 "nvme_admin": false, 00:26:56.969 "nvme_io": false 00:26:56.969 }, 00:26:56.969 "driver_specific": { 00:26:56.969 "lvol": { 00:26:56.969 "lvol_store_uuid": "9233e873-0710-47e3-9d75-18b80f06815c", 00:26:56.969 "base_bdev": "basen1", 00:26:56.969 "thin_provision": true, 00:26:56.969 "snapshot": false, 00:26:56.969 "clone": false, 00:26:56.969 "esnap_clone": false 00:26:56.969 } 00:26:56.969 } 00:26:56.969 } 00:26:56.969 ]' 00:26:56.969 07:42:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:57.228 07:42:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:57.228 07:42:06 -- common/autotest_common.sh@1373 -- # jq '.[] 
.num_blocks' 00:26:57.228 07:42:06 -- common/autotest_common.sh@1373 -- # nb=5242880 00:26:57.228 07:42:06 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:26:57.228 07:42:06 -- common/autotest_common.sh@1377 -- # echo 20480 00:26:57.228 07:42:06 -- ftl/common.sh@41 -- # local base_size=1024 00:26:57.228 07:42:06 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:57.228 07:42:06 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:57.489 07:42:06 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:57.489 07:42:06 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:57.489 07:42:06 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:57.489 07:42:06 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:57.489 07:42:06 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:57.489 07:42:06 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 24c33f82-510d-4402-8318-c468557b7d48 -c cachen1p0 --l2p_dram_limit 2 00:26:57.749 [2024-11-19 07:42:06.885720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.885762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:57.749 [2024-11-19 07:42:06.885775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:57.749 [2024-11-19 07:42:06.885783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.885824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.885832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:57.749 [2024-11-19 07:42:06.885839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:57.749 [2024-11-19 07:42:06.885845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.885862] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:57.749 [2024-11-19 07:42:06.886519] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:57.749 [2024-11-19 07:42:06.886541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.886547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:57.749 [2024-11-19 07:42:06.886555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.679 ms 00:26:57.749 [2024-11-19 07:42:06.886561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.886589] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9b9c7808-8c84-45dc-aad4-d7a509251050 00:26:57.749 [2024-11-19 07:42:06.887551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.887574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:57.749 [2024-11-19 07:42:06.887581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:57.749 [2024-11-19 07:42:06.887588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.892234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.892343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:26:57.749 [2024-11-19 07:42:06.892355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.591 ms 00:26:57.749 [2024-11-19 07:42:06.892362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.892394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.892403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:57.749 [2024-11-19 07:42:06.892409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:57.749 [2024-11-19 07:42:06.892418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.892452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.892462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:57.749 [2024-11-19 07:42:06.892469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:57.749 [2024-11-19 07:42:06.892476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.892493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:57.749 [2024-11-19 07:42:06.895394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.895484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:57.749 [2024-11-19 07:42:06.895498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.903 ms 00:26:57.749 [2024-11-19 07:42:06.895504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.749 [2024-11-19 07:42:06.895528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.749 [2024-11-19 07:42:06.895535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:57.749 [2024-11-19 07:42:06.895542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:57.750 [2024-11-19 07:42:06.895548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.750 [2024-11-19 07:42:06.895562] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:57.750 [2024-11-19 07:42:06.895654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:57.750 [2024-11-19 07:42:06.895666] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:57.750 [2024-11-19 07:42:06.895674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:57.750 [2024-11-19 07:42:06.895682] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:57.750 [2024-11-19 07:42:06.895689] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:57.750 [2024-11-19 07:42:06.895698] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:57.750 [2024-11-19 07:42:06.895704] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:57.750 [2024-11-19 07:42:06.895711] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:57.750 [2024-11-19 07:42:06.895717] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:57.750 [2024-11-19 
07:42:06.895723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.750 [2024-11-19 07:42:06.895734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:57.750 [2024-11-19 07:42:06.895741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:26:57.750 [2024-11-19 07:42:06.895746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.750 [2024-11-19 07:42:06.895795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.750 [2024-11-19 07:42:06.895801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:57.750 [2024-11-19 07:42:06.895807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:57.750 [2024-11-19 07:42:06.895814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.750 [2024-11-19 07:42:06.895870] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:57.750 [2024-11-19 07:42:06.895877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:57.750 [2024-11-19 07:42:06.895884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:57.750 [2024-11-19 07:42:06.895890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.895896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:57.750 [2024-11-19 07:42:06.895901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.895907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:57.750 [2024-11-19 07:42:06.895912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:57.750 [2024-11-19 07:42:06.895918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:57.750 [2024-11-19 07:42:06.895923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.895929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:57.750 [2024-11-19 07:42:06.895934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:57.750 [2024-11-19 07:42:06.895942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.895948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:57.750 [2024-11-19 07:42:06.895955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:57.750 [2024-11-19 07:42:06.895960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.895967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:57.750 [2024-11-19 07:42:06.895972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:57.750 [2024-11-19 07:42:06.895979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.895984] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:57.750 [2024-11-19 07:42:06.895990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:57.750 [2024-11-19 07:42:06.895995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:57.750 [2024-11-19 07:42:06.896006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:57.750 [2024-11-19 
07:42:06.896012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:57.750 [2024-11-19 07:42:06.896023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:57.750 [2024-11-19 07:42:06.896028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:57.750 [2024-11-19 07:42:06.896038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:57.750 [2024-11-19 07:42:06.896044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:57.750 [2024-11-19 07:42:06.896056] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:57.750 [2024-11-19 07:42:06.896061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896066] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:57.750 [2024-11-19 07:42:06.896071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:57.750 [2024-11-19 07:42:06.896077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.896082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:57.750 [2024-11-19 07:42:06.896089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.896100] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:57.750 [2024-11-19 07:42:06.896106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:57.750 [2024-11-19 07:42:06.896113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.750 [2024-11-19 07:42:06.896126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:57.750 [2024-11-19 07:42:06.896131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:57.750 [2024-11-19 07:42:06.896138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:57.750 [2024-11-19 07:42:06.896143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:57.750 [2024-11-19 07:42:06.896151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:57.750 [2024-11-19 07:42:06.896156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:57.750 [2024-11-19 07:42:06.896163] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:57.750 [2024-11-19 07:42:06.896170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:57.750 [2024-11-19 07:42:06.896202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:57.750 [2024-11-19 
07:42:06.896209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:57.750 [2024-11-19 07:42:06.896222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:57.750 [2024-11-19 07:42:06.896227] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:57.750 [2024-11-19 07:42:06.896234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:57.750 [2024-11-19 07:42:06.896239] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:57.750 [2024-11-19 07:42:06.896273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:57.750 [2024-11-19 07:42:06.896278] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:57.750 [2024-11-19 07:42:06.896286] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896292] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:57.750 [2024-11-19 07:42:06.896299] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:57.750 [2024-11-19 07:42:06.896304] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:57.750 [2024-11-19 07:42:06.896311] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:57.750 [2024-11-19 07:42:06.896317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.750 [2024-11-19 07:42:06.896324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:57.750 [2024-11-19 07:42:06.896330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:26:57.750 [2024-11-19 07:42:06.896336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.750 [2024-11-19 07:42:06.907901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.750 [2024-11-19 07:42:06.907932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:26:57.750 [2024-11-19 07:42:06.907940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.535 ms 00:26:57.750 [2024-11-19 07:42:06.907947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.907978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.907987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:57.751 [2024-11-19 07:42:06.907994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:57.751 [2024-11-19 07:42:06.908001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.931844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.931873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:57.751 [2024-11-19 07:42:06.931882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.809 ms 00:26:57.751 [2024-11-19 07:42:06.931889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.931914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.931923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:57.751 [2024-11-19 07:42:06.931929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:57.751 [2024-11-19 07:42:06.931937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.932271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.932289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:57.751 [2024-11-19 07:42:06.932296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:26:57.751 [2024-11-19 07:42:06.932304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.932338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.932347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:57.751 [2024-11-19 07:42:06.932353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:57.751 [2024-11-19 07:42:06.932359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.944247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.944272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:57.751 [2024-11-19 07:42:06.944280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.874 ms 00:26:57.751 [2024-11-19 07:42:06.944287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.953152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:57.751 [2024-11-19 07:42:06.953860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.953883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:57.751 [2024-11-19 07:42:06.953891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.517 ms 00:26:57.751 [2024-11-19 07:42:06.953897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 
07:42:06.980264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.751 [2024-11-19 07:42:06.980291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:57.751 [2024-11-19 07:42:06.980301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.348 ms 00:26:57.751 [2024-11-19 07:42:06.980307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.751 [2024-11-19 07:42:06.980327] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:26:57.751 [2024-11-19 07:42:06.980334] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:27:01.036 [2024-11-19 07:42:09.777574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.777629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:01.036 [2024-11-19 07:42:09.777647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2797.232 ms 00:27:01.036 [2024-11-19 07:42:09.777655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.777750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.777760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:01.036 [2024-11-19 07:42:09.777773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:01.036 [2024-11-19 07:42:09.777780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.801709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.801844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:01.036 [2024-11-19 07:42:09.801866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.883 ms 00:27:01.036 [2024-11-19 07:42:09.801874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.825108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.825251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:01.036 [2024-11-19 07:42:09.825279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.195 ms 00:27:01.036 [2024-11-19 07:42:09.825289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.826125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.826166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:01.036 [2024-11-19 07:42:09.826195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.598 ms 00:27:01.036 [2024-11-19 07:42:09.826204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.889337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.889384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:01.036 [2024-11-19 07:42:09.889398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 63.085 ms 00:27:01.036 [2024-11-19 07:42:09.889406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.914032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:01.036 [2024-11-19 07:42:09.914068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:01.036 [2024-11-19 07:42:09.914080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.586 ms 00:27:01.036 [2024-11-19 07:42:09.914088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.915283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.915311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:01.036 [2024-11-19 07:42:09.915324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.153 ms 00:27:01.036 [2024-11-19 07:42:09.915331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.939647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.939678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:01.036 [2024-11-19 07:42:09.939690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.281 ms 00:27:01.036 [2024-11-19 07:42:09.939697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.939735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.939744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:01.036 [2024-11-19 07:42:09.939754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:01.036 [2024-11-19 07:42:09.939760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.939839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.036 [2024-11-19 07:42:09.939848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:01.036 [2024-11-19 07:42:09.939858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:01.036 [2024-11-19 07:42:09.939865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.036 [2024-11-19 07:42:09.940693] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3054.552 ms, result 0 00:27:01.036 { 00:27:01.036 "name": "ftl", 00:27:01.036 "uuid": "9b9c7808-8c84-45dc-aad4-d7a509251050" 00:27:01.036 } 00:27:01.036 07:42:09 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:01.036 [2024-11-19 07:42:10.216413] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.036 07:42:10 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:01.295 07:42:10 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:01.556 [2024-11-19 07:42:10.584735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:01.556 07:42:10 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:01.556 [2024-11-19 07:42:10.777326] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:01.556 07:42:10 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:02.125 Fill 
FTL, iteration 1 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:02.125 07:42:11 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:02.125 07:42:11 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:02.125 07:42:11 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:02.125 07:42:11 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:02.125 07:42:11 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:02.125 07:42:11 -- ftl/common.sh@163 -- # spdk_ini_pid=79003 00:27:02.125 07:42:11 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:02.125 07:42:11 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:02.125 07:42:11 -- ftl/common.sh@165 -- # waitforlisten 79003 /var/tmp/spdk.tgt.sock 00:27:02.125 07:42:11 -- common/autotest_common.sh@829 -- # '[' -z 79003 ']' 00:27:02.125 07:42:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:02.125 07:42:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:02.125 07:42:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:02.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:02.125 07:42:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:02.125 07:42:11 -- common/autotest_common.sh@10 -- # set +x 00:27:02.125 [2024-11-19 07:42:11.182866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:02.125 [2024-11-19 07:42:11.183473] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79003 ] 00:27:02.125 [2024-11-19 07:42:11.332305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.384 [2024-11-19 07:42:11.513126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:02.384 [2024-11-19 07:42:11.513490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.762 07:42:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:03.762 07:42:12 -- common/autotest_common.sh@862 -- # return 0 00:27:03.762 07:42:12 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:03.762 ftln1 00:27:03.762 07:42:12 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:03.762 07:42:12 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:04.024 07:42:13 -- ftl/common.sh@173 -- # echo ']}' 00:27:04.024 07:42:13 -- ftl/common.sh@176 -- # killprocess 79003 00:27:04.024 07:42:13 -- common/autotest_common.sh@936 -- # '[' -z 79003 ']' 00:27:04.024 07:42:13 -- common/autotest_common.sh@940 -- # kill -0 79003 00:27:04.024 07:42:13 -- common/autotest_common.sh@941 -- # uname 00:27:04.024 07:42:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:04.024 07:42:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79003 00:27:04.024 killing process with pid 79003 00:27:04.024 07:42:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:04.024 07:42:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:04.024 07:42:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79003' 00:27:04.024 07:42:13 -- common/autotest_common.sh@955 -- # kill 79003 00:27:04.024 07:42:13 -- common/autotest_common.sh@960 -- # wait 79003 00:27:05.410 07:42:14 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:05.411 07:42:14 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:05.411 [2024-11-19 07:42:14.505365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
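The xtrace above fixes the fill geometry: each iteration writes count x bs = 1024 x 1048576 bytes = 1073741824 bytes (the traced $size, i.e. 1 GiB) of /dev/urandom into ftln1 at queue depth 2, over two iterations. Below is a minimal sketch of the fill-and-checksum loop implied by those traced variables, reconstructed from the trace rather than quoted from upgrade_shutdown.sh; $FTL_TEST_FILE stands in for the traced /home/vagrant/spdk_repo/spdk/test/ftl/file path, and tcp_dd is the traced helper that wraps spdk_dd with the initiator's --cpumask/--rpc-socket/--json arguments.

# Sketch reconstructed from the xtrace above, not the script verbatim.
bs=1048576; count=1024; qd=2; iterations=2
seek=0; skip=0; sums=()
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    # Write the next 1 GiB window of random data into the FTL bdev.
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))
    # Read the same window back out and record its checksum.
    tcp_dd --ib=ftln1 --of="$FTL_TEST_FILE" --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))
    sums[i]=$(md5sum "$FTL_TEST_FILE" | cut -f1 -d' ')
done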
00:27:05.411 [2024-11-19 07:42:14.505591] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79051 ] 00:27:05.411 [2024-11-19 07:42:14.653887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.675 [2024-11-19 07:42:14.828693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.067  [2024-11-19T07:42:17.258Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-19T07:42:18.199Z] Copying: 424/1024 [MB] (212 MBps) [2024-11-19T07:42:19.584Z] Copying: 682/1024 [MB] (258 MBps) [2024-11-19T07:42:19.584Z] Copying: 948/1024 [MB] (266 MBps) [2024-11-19T07:42:20.190Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:27:10.940 00:27:10.940 Calculate MD5 checksum, iteration 1 00:27:10.940 07:42:20 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:10.940 07:42:20 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:10.940 07:42:20 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:10.940 07:42:20 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:10.940 07:42:20 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:10.940 07:42:20 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:10.940 07:42:20 -- ftl/common.sh@154 -- # return 0 00:27:10.940 07:42:20 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:10.940 [2024-11-19 07:42:20.166532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
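Note the asymmetry between the two directions of the test: the fill step used --seek to offset into the output bdev, while the read-back launched above uses --skip to offset into the input bdev. This mirrors classic dd semantics, which the traced spdk_dd flags appear to follow:

# dd-style offset semantics as used by the traced invocations:
#   --seek=N  skips N blocks of the *output* before writing  (fill direction)
#   --skip=N  skips N blocks of the *input*  before reading  (verify direction)
tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0             # write MiB 0..1023
tcp_dd --ib=ftln1 --of="$FTL_TEST_FILE" --bs=1048576 --count=1024 --qd=2 --skip=0         # read back the same window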
00:27:10.940 [2024-11-19 07:42:20.166640] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79111 ] 00:27:11.201 [2024-11-19 07:42:20.314066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.462 [2024-11-19 07:42:20.491419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.845  [2024-11-19T07:42:22.668Z] Copying: 669/1024 [MB] (669 MBps) [2024-11-19T07:42:23.239Z] Copying: 1024/1024 [MB] (average 661 MBps) 00:27:13.989 00:27:13.989 07:42:22 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:13.989 07:42:22 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:15.900 07:42:25 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:15.900 Fill FTL, iteration 2 00:27:15.900 07:42:25 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b49e89b6ae218d4ab2e61ab168eecdbb 00:27:15.900 07:42:25 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:15.900 07:42:25 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:15.900 07:42:25 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:15.900 07:42:25 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:15.900 07:42:25 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:15.900 07:42:25 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:15.900 07:42:25 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:15.900 07:42:25 -- ftl/common.sh@154 -- # return 0 00:27:15.900 07:42:25 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:15.900 [2024-11-19 07:42:25.066636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
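The checksum just recorded (sums[0]=b49e89b6ae218d4ab2e61ab168eecdbb) is the ground truth for the first 1 GiB window; iteration 2 now fills and checksums the next window at seek=1024. The point of keeping these sums is that, after the target is shut down with prep_upgrade_on_shutdown enabled and restarted, re-reading the same windows must reproduce them. The comparison itself happens past the end of this excerpt, so the sketch below only shows the shape such a validation would take; the loop and variable names are illustrative, not quoted from the test.

# Hypothetical post-restart validation: re-read each 1 GiB window and compare
# against the MD5 sums captured before shutdown.
for ((i = 0; i < iterations; i++)); do
    tcp_dd --ib=ftln1 --of="$FTL_TEST_FILE" --bs=$bs --count=$count --qd=$qd --skip=$((i * count))
    [[ $(md5sum "$FTL_TEST_FILE" | cut -f1 -d' ') == "${sums[i]}" ]] || exit 1
done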
00:27:15.900 [2024-11-19 07:42:25.066865] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79174 ] 00:27:16.158 [2024-11-19 07:42:25.206683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.158 [2024-11-19 07:42:25.382086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.537  [2024-11-19T07:42:27.729Z] Copying: 218/1024 [MB] (218 MBps) [2024-11-19T07:42:29.108Z] Copying: 476/1024 [MB] (258 MBps) [2024-11-19T07:42:30.045Z] Copying: 758/1024 [MB] (282 MBps) [2024-11-19T07:42:30.611Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:27:21.361 00:27:21.361 07:42:30 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:21.361 07:42:30 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:21.361 Calculate MD5 checksum, iteration 2 00:27:21.361 07:42:30 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:21.361 07:42:30 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:21.361 07:42:30 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:21.361 07:42:30 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:21.361 07:42:30 -- ftl/common.sh@154 -- # return 0 00:27:21.361 07:42:30 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:21.361 [2024-11-19 07:42:30.409596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:21.361 [2024-11-19 07:42:30.409839] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79228 ] 00:27:21.361 [2024-11-19 07:42:30.556570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.619 [2024-11-19 07:42:30.695144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.995  [2024-11-19T07:42:32.812Z] Copying: 679/1024 [MB] (679 MBps) [2024-11-19T07:42:33.746Z] Copying: 1024/1024 [MB] (average 684 MBps) 00:27:24.496 00:27:24.496 07:42:33 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:24.496 07:42:33 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:27.057 07:42:35 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:27.057 07:42:35 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d1e05071c43a0237c319c2e942004150 00:27:27.057 07:42:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:27.057 07:42:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:27.057 07:42:35 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:27.057 [2024-11-19 07:42:35.822893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.057 [2024-11-19 07:42:35.823066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:27.057 [2024-11-19 07:42:35.823082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:27.057 [2024-11-19 07:42:35.823092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.057 [2024-11-19 07:42:35.823121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.057 [2024-11-19 07:42:35.823128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:27.057 [2024-11-19 07:42:35.823134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:27.057 [2024-11-19 07:42:35.823140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.057 [2024-11-19 07:42:35.823155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.057 [2024-11-19 07:42:35.823162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:27.057 [2024-11-19 07:42:35.823172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.057 [2024-11-19 07:42:35.823195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.057 [2024-11-19 07:42:35.823247] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.345 ms, result 0 00:27:27.057 true 00:27:27.057 07:42:35 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:27.057 { 00:27:27.057 "name": "ftl", 00:27:27.057 "properties": [ 00:27:27.057 { 00:27:27.057 "name": "superblock_version", 00:27:27.057 "value": 5, 00:27:27.057 "read-only": true 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "name": "base_device", 00:27:27.057 "bands": [ 00:27:27.057 { 00:27:27.057 "id": 0, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 1, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 2, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 
00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 3, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 4, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 5, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 6, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 7, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 8, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 9, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 10, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 11, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 12, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 13, 00:27:27.057 "state": "FREE", 00:27:27.057 "validity": 0.0 00:27:27.057 }, 00:27:27.057 { 00:27:27.057 "id": 14, 00:27:27.057 "state": "FREE", 00:27:27.058 "validity": 0.0 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "id": 15, 00:27:27.058 "state": "FREE", 00:27:27.058 "validity": 0.0 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "id": 16, 00:27:27.058 "state": "FREE", 00:27:27.058 "validity": 0.0 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "id": 17, 00:27:27.058 "state": "FREE", 00:27:27.058 "validity": 0.0 00:27:27.058 } 00:27:27.058 ], 00:27:27.058 "read-only": true 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "name": "cache_device", 00:27:27.058 "type": "bdev", 00:27:27.058 "chunks": [ 00:27:27.058 { 00:27:27.058 "id": 0, 00:27:27.058 "state": "CLOSED", 00:27:27.058 "utilization": 1.0 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "id": 1, 00:27:27.058 "state": "CLOSED", 00:27:27.058 "utilization": 1.0 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "id": 2, 00:27:27.058 "state": "OPEN", 00:27:27.058 "utilization": 0.001953125 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "id": 3, 00:27:27.058 "state": "OPEN", 00:27:27.058 "utilization": 0.0 00:27:27.058 } 00:27:27.058 ], 00:27:27.058 "read-only": true 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "name": "verbose_mode", 00:27:27.058 "value": true, 00:27:27.058 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:27.058 }, 00:27:27.058 { 00:27:27.058 "name": "prep_upgrade_on_shutdown", 00:27:27.058 "value": false, 00:27:27.058 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:27.058 } 00:27:27.058 ] 00:27:27.058 } 00:27:27.058 07:42:36 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:27.058 [2024-11-19 07:42:36.206622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.058 [2024-11-19 07:42:36.206755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:27.058 [2024-11-19 07:42:36.206806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:27.058 [2024-11-19 07:42:36.206824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.058 [2024-11-19 07:42:36.206857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:27.058 [2024-11-19 07:42:36.206874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:27.058 [2024-11-19 07:42:36.206889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:27.058 [2024-11-19 07:42:36.206903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.058 [2024-11-19 07:42:36.206926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.058 [2024-11-19 07:42:36.206941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:27.058 [2024-11-19 07:42:36.206956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.058 [2024-11-19 07:42:36.207003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.058 [2024-11-19 07:42:36.207063] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.428 ms, result 0 00:27:27.058 true 00:27:27.058 07:42:36 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:27.058 07:42:36 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:27.058 07:42:36 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:27.317 07:42:36 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:27.317 07:42:36 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:27.317 07:42:36 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:27.317 [2024-11-19 07:42:36.546873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.317 [2024-11-19 07:42:36.546907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:27.317 [2024-11-19 07:42:36.546916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:27.317 [2024-11-19 07:42:36.546922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.317 [2024-11-19 07:42:36.546938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.317 [2024-11-19 07:42:36.546944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:27.317 [2024-11-19 07:42:36.546950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.317 [2024-11-19 07:42:36.546955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.317 [2024-11-19 07:42:36.546970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.317 [2024-11-19 07:42:36.546975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:27.317 [2024-11-19 07:42:36.546981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.317 [2024-11-19 07:42:36.546986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.317 [2024-11-19 07:42:36.547028] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.146 ms, result 0 00:27:27.317 true 00:27:27.576 07:42:36 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:27.576 { 00:27:27.576 "name": "ftl", 00:27:27.576 "properties": [ 00:27:27.576 { 00:27:27.576 "name": "superblock_version", 00:27:27.576 "value": 5, 00:27:27.576 "read-only": true 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 
"name": "base_device", 00:27:27.576 "bands": [ 00:27:27.576 { 00:27:27.576 "id": 0, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 1, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 2, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 3, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 4, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 5, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 6, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 7, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 8, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 9, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 10, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 11, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 12, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 13, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 14, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 15, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 16, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 17, 00:27:27.576 "state": "FREE", 00:27:27.576 "validity": 0.0 00:27:27.576 } 00:27:27.576 ], 00:27:27.576 "read-only": true 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "name": "cache_device", 00:27:27.576 "type": "bdev", 00:27:27.576 "chunks": [ 00:27:27.576 { 00:27:27.576 "id": 0, 00:27:27.576 "state": "CLOSED", 00:27:27.576 "utilization": 1.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 1, 00:27:27.576 "state": "CLOSED", 00:27:27.576 "utilization": 1.0 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 2, 00:27:27.576 "state": "OPEN", 00:27:27.576 "utilization": 0.001953125 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "id": 3, 00:27:27.576 "state": "OPEN", 00:27:27.576 "utilization": 0.0 00:27:27.576 } 00:27:27.576 ], 00:27:27.576 "read-only": true 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "name": "verbose_mode", 00:27:27.576 "value": true, 00:27:27.576 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:27.576 }, 00:27:27.576 { 00:27:27.576 "name": "prep_upgrade_on_shutdown", 00:27:27.576 "value": true, 00:27:27.576 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:27.576 } 00:27:27.576 ] 00:27:27.576 } 00:27:27.576 07:42:36 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:27.576 07:42:36 -- ftl/common.sh@130 -- # [[ -n 78879 ]] 00:27:27.576 07:42:36 -- ftl/common.sh@131 -- # killprocess 78879 00:27:27.576 07:42:36 -- common/autotest_common.sh@936 -- # '[' -z 78879 ']' 00:27:27.576 07:42:36 -- 
common/autotest_common.sh@940 -- # kill -0 78879 00:27:27.576 07:42:36 -- common/autotest_common.sh@941 -- # uname 00:27:27.576 07:42:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:27.576 07:42:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78879 00:27:27.576 killing process with pid 78879 00:27:27.576 07:42:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:27.576 07:42:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:27.576 07:42:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78879' 00:27:27.576 07:42:36 -- common/autotest_common.sh@955 -- # kill 78879 00:27:27.576 07:42:36 -- common/autotest_common.sh@960 -- # wait 78879 00:27:28.143 [2024-11-19 07:42:37.319929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:28.143 [2024-11-19 07:42:37.331458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.143 [2024-11-19 07:42:37.331491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:28.143 [2024-11-19 07:42:37.331500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:28.143 [2024-11-19 07:42:37.331507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.143 [2024-11-19 07:42:37.331522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:28.143 [2024-11-19 07:42:37.333514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.143 [2024-11-19 07:42:37.333537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:28.143 [2024-11-19 07:42:37.333545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.981 ms 00:27:28.143 [2024-11-19 07:42:37.333552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.893030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.893075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:34.703 [2024-11-19 07:42:43.893087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6559.426 ms 00:27:34.703 [2024-11-19 07:42:43.893093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.894243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.894370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:34.703 [2024-11-19 07:42:43.894384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:27:34.703 [2024-11-19 07:42:43.894389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.895244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.895256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:34.703 [2024-11-19 07:42:43.895264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.833 ms 00:27:34.703 [2024-11-19 07:42:43.895269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.902813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.902836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:34.703 [2024-11-19 07:42:43.902843] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.511 ms 00:27:34.703 [2024-11-19 07:42:43.902849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.907721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.907746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:34.703 [2024-11-19 07:42:43.907754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.848 ms 00:27:34.703 [2024-11-19 07:42:43.907761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.907805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.907812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:34.703 [2024-11-19 07:42:43.907818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:34.703 [2024-11-19 07:42:43.907827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.914776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.914806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:34.703 [2024-11-19 07:42:43.914813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.937 ms 00:27:34.703 [2024-11-19 07:42:43.914818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.922055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.922153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:34.703 [2024-11-19 07:42:43.922164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.214 ms 00:27:34.703 [2024-11-19 07:42:43.922170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.929338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.929360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:34.703 [2024-11-19 07:42:43.929366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.133 ms 00:27:34.703 [2024-11-19 07:42:43.929371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.936332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.936423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:34.703 [2024-11-19 07:42:43.936433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.916 ms 00:27:34.703 [2024-11-19 07:42:43.936438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.936459] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:34.703 [2024-11-19 07:42:43.936469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:34.703 [2024-11-19 07:42:43.936477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:34.703 [2024-11-19 07:42:43.936484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:34.703 [2024-11-19 07:42:43.936490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.703 [2024-11-19 07:42:43.936582] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:34.703 [2024-11-19 07:42:43.936588] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9b9c7808-8c84-45dc-aad4-d7a509251050 00:27:34.703 [2024-11-19 07:42:43.936594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:34.703 [2024-11-19 07:42:43.936600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:34.703 [2024-11-19 07:42:43.936605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:34.703 [2024-11-19 07:42:43.936611] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:34.703 [2024-11-19 07:42:43.936616] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:34.703 [2024-11-19 07:42:43.936622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:34.703 [2024-11-19 07:42:43.936629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:34.703 [2024-11-19 07:42:43.936633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:34.703 [2024-11-19 07:42:43.936638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:34.703 [2024-11-19 07:42:43.936644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.936651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:34.703 [2024-11-19 07:42:43.936657] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:27:34.703 [2024-11-19 07:42:43.936663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.703 [2024-11-19 07:42:43.946300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.703 [2024-11-19 07:42:43.946322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:34.703 [2024-11-19 07:42:43.946330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.624 ms 00:27:34.704 [2024-11-19 07:42:43.946336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.704 [2024-11-19 07:42:43.946483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.704 [2024-11-19 07:42:43.946490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:34.704 [2024-11-19 07:42:43.946496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.130 ms 00:27:34.704 [2024-11-19 07:42:43.946501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.962 [2024-11-19 07:42:43.981168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.962 [2024-11-19 07:42:43.981202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:34.962 [2024-11-19 07:42:43.981210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.962 [2024-11-19 07:42:43.981219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.962 [2024-11-19 07:42:43.981241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.962 [2024-11-19 07:42:43.981247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:34.962 [2024-11-19 07:42:43.981253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.962 [2024-11-19 07:42:43.981258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.962 [2024-11-19 07:42:43.981301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.962 [2024-11-19 07:42:43.981307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:34.962 [2024-11-19 07:42:43.981313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.962 [2024-11-19 07:42:43.981319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.962 [2024-11-19 07:42:43.981333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.962 [2024-11-19 07:42:43.981339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:34.962 [2024-11-19 07:42:43.981344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.962 [2024-11-19 07:42:43.981350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.962 [2024-11-19 07:42:44.039757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.039883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:34.963 [2024-11-19 07:42:44.039897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.039904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.062433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:34.963 
[2024-11-19 07:42:44.062444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.062449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.062497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:34.963 [2024-11-19 07:42:44.062503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.062509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.062549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:34.963 [2024-11-19 07:42:44.062555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.062560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.062632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:34.963 [2024-11-19 07:42:44.062639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.062644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.062672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:34.963 [2024-11-19 07:42:44.062680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.062685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.062720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:34.963 [2024-11-19 07:42:44.062726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.062731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:34.963 [2024-11-19 07:42:44.062774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:34.963 [2024-11-19 07:42:44.062780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:34.963 [2024-11-19 07:42:44.062786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.963 [2024-11-19 07:42:44.062872] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 6731.372 ms, result 0 00:27:41.539 07:42:49 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:41.539 07:42:49 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:41.539 07:42:49 -- ftl/common.sh@81 -- # local base_bdev= 00:27:41.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
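The statistics dump in the shutdown trace above is worth pausing on: total writes 786752 against user writes 524288 is exactly the reported WAF of 1.5006, i.e. the write amplification factor is total media writes divided by user-issued writes, with the remainder being FTL housekeeping traffic (metadata and relocation writes on top of the user data). A minimal shell sketch of that arithmetic, using the counter values copied from the dump (the awk one-liner is illustrative, not part of the harness):

    # WAF = total media writes / user writes, per the ftl_dev_dump_stats output.
    total_writes=786752   # "total writes" from the dump above
    user_writes=524288    # "user writes" from the dump above
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { printf "WAF: %.4f\n", t / u }'   # prints: WAF: 1.5006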
00:27:41.539 07:42:49 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:41.539 07:42:49 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:41.539 07:42:49 -- ftl/common.sh@89 -- # spdk_tgt_pid=79453 00:27:41.539 07:42:49 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:41.539 07:42:49 -- ftl/common.sh@91 -- # waitforlisten 79453 00:27:41.539 07:42:49 -- common/autotest_common.sh@829 -- # '[' -z 79453 ']' 00:27:41.539 07:42:49 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:41.539 07:42:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.539 07:42:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.539 07:42:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.539 07:42:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.539 07:42:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.539 [2024-11-19 07:42:50.042098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:41.540 [2024-11-19 07:42:50.042510] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79453 ] 00:27:41.540 [2024-11-19 07:42:50.195048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.540 [2024-11-19 07:42:50.402259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:41.540 [2024-11-19 07:42:50.402701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.113 [2024-11-19 07:42:51.100620] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.113 [2024-11-19 07:42:51.100926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.113 [2024-11-19 07:42:51.244307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.244524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:42.113 [2024-11-19 07:42:51.244965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:42.113 [2024-11-19 07:42:51.245024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.245226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.245774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:42.113 [2024-11-19 07:42:51.245899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.082 ms 00:27:42.113 [2024-11-19 07:42:51.245926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.246031] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:42.113 [2024-11-19 07:42:51.246858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:42.113 [2024-11-19 07:42:51.246983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.246995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:42.113 [2024-11-19 
07:42:51.247005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.963 ms 00:27:42.113 [2024-11-19 07:42:51.247012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.248782] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:42.113 [2024-11-19 07:42:51.263297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.263494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:42.113 [2024-11-19 07:42:51.263517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.518 ms 00:27:42.113 [2024-11-19 07:42:51.263526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.263717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.263749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:42.113 [2024-11-19 07:42:51.263760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:42.113 [2024-11-19 07:42:51.263768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.272233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.272278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:42.113 [2024-11-19 07:42:51.272288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.372 ms 00:27:42.113 [2024-11-19 07:42:51.272300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.272344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.272352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:42.113 [2024-11-19 07:42:51.272360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:42.113 [2024-11-19 07:42:51.272367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.272414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.272423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:42.113 [2024-11-19 07:42:51.272432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:42.113 [2024-11-19 07:42:51.272440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.272471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:42.113 [2024-11-19 07:42:51.276797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.276838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:42.113 [2024-11-19 07:42:51.276854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.337 ms 00:27:42.113 [2024-11-19 07:42:51.276861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.276907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.113 [2024-11-19 07:42:51.276915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:42.113 [2024-11-19 07:42:51.276924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:42.113 [2024-11-19 07:42:51.276932] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.113 [2024-11-19 07:42:51.276984] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:42.113 [2024-11-19 07:42:51.277008] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:42.113 [2024-11-19 07:42:51.277045] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:42.113 [2024-11-19 07:42:51.277064] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:42.113 [2024-11-19 07:42:51.277141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:42.114 [2024-11-19 07:42:51.277152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:42.114 [2024-11-19 07:42:51.277162] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:42.114 [2024-11-19 07:42:51.277172] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277203] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277212] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:42.114 [2024-11-19 07:42:51.277224] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:42.114 [2024-11-19 07:42:51.277231] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:42.114 [2024-11-19 07:42:51.277242] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:42.114 [2024-11-19 07:42:51.277250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.114 [2024-11-19 07:42:51.277257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:42.114 [2024-11-19 07:42:51.277265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:27:42.114 [2024-11-19 07:42:51.277272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.114 [2024-11-19 07:42:51.277336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.114 [2024-11-19 07:42:51.277344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:42.114 [2024-11-19 07:42:51.277352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:42.114 [2024-11-19 07:42:51.277359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.114 [2024-11-19 07:42:51.277438] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:42.114 [2024-11-19 07:42:51.277449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:42.114 [2024-11-19 07:42:51.277457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277474] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:42.114 [2024-11-19 07:42:51.277480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:42.114 [2024-11-19 07:42:51.277495] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:42.114 [2024-11-19 07:42:51.277503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:42.114 [2024-11-19 07:42:51.277510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:42.114 [2024-11-19 07:42:51.277524] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:42.114 [2024-11-19 07:42:51.277534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:42.114 [2024-11-19 07:42:51.277549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:42.114 [2024-11-19 07:42:51.277594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:42.114 [2024-11-19 07:42:51.277601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:42.114 [2024-11-19 07:42:51.277614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:42.114 [2024-11-19 07:42:51.277621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:42.114 [2024-11-19 07:42:51.277639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:42.114 [2024-11-19 07:42:51.277646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:42.114 [2024-11-19 07:42:51.277659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:42.114 [2024-11-19 07:42:51.277666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:42.114 [2024-11-19 07:42:51.277679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:42.114 [2024-11-19 07:42:51.277686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277692] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:42.114 [2024-11-19 07:42:51.277699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:42.114 [2024-11-19 07:42:51.277706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:42.114 [2024-11-19 07:42:51.277719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:42.114 [2024-11-19 07:42:51.277725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277732] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:42.114 [2024-11-19 07:42:51.277739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277745] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277751] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:42.114 [2024-11-19 07:42:51.277759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:42.114 [2024-11-19 07:42:51.277766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.114 [2024-11-19 07:42:51.277782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:42.114 [2024-11-19 07:42:51.277790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:42.114 [2024-11-19 07:42:51.277796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:42.114 [2024-11-19 07:42:51.277803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:42.114 [2024-11-19 07:42:51.277810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:42.114 [2024-11-19 07:42:51.277817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:42.114 [2024-11-19 07:42:51.277825] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:42.114 [2024-11-19 07:42:51.277834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:42.114 [2024-11-19 07:42:51.277853] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:42.114 [2024-11-19 07:42:51.277879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:42.114 [2024-11-19 07:42:51.277894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:42.114 [2024-11-19 07:42:51.277901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:42.114 [2024-11-19 07:42:51.277908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 
blk_offs:0x1f60 blk_sz:0x100000 00:27:42.114 [2024-11-19 07:42:51.277945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:42.114 [2024-11-19 07:42:51.277952] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:42.114 [2024-11-19 07:42:51.277960] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277968] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.114 [2024-11-19 07:42:51.277976] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:42.114 [2024-11-19 07:42:51.277983] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:42.114 [2024-11-19 07:42:51.277990] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:42.114 [2024-11-19 07:42:51.277998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.114 [2024-11-19 07:42:51.278005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:42.114 [2024-11-19 07:42:51.278014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.603 ms 00:27:42.114 [2024-11-19 07:42:51.278021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.114 [2024-11-19 07:42:51.296836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.114 [2024-11-19 07:42:51.296890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:42.114 [2024-11-19 07:42:51.296903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.766 ms 00:27:42.114 [2024-11-19 07:42:51.296913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.114 [2024-11-19 07:42:51.296961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.114 [2024-11-19 07:42:51.296970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:42.114 [2024-11-19 07:42:51.296979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:42.114 [2024-11-19 07:42:51.296986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.115 [2024-11-19 07:42:51.332728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.115 [2024-11-19 07:42:51.332774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:42.115 [2024-11-19 07:42:51.332786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.676 ms 00:27:42.115 [2024-11-19 07:42:51.332794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.115 [2024-11-19 07:42:51.332837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.115 [2024-11-19 07:42:51.332845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:42.115 [2024-11-19 07:42:51.332855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:42.115 [2024-11-19 07:42:51.332862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.115 [2024-11-19 07:42:51.333501] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.115 [2024-11-19 07:42:51.333542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:42.115 [2024-11-19 07:42:51.333553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:27:42.115 [2024-11-19 07:42:51.333560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.115 [2024-11-19 07:42:51.333624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.115 [2024-11-19 07:42:51.333633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:42.115 [2024-11-19 07:42:51.333642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:42.115 [2024-11-19 07:42:51.333650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.115 [2024-11-19 07:42:51.352338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.115 [2024-11-19 07:42:51.352383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:42.115 [2024-11-19 07:42:51.352395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.662 ms 00:27:42.115 [2024-11-19 07:42:51.352403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.366924] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:42.377 [2024-11-19 07:42:51.367119] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:42.377 [2024-11-19 07:42:51.367138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.367147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:42.377 [2024-11-19 07:42:51.367157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.615 ms 00:27:42.377 [2024-11-19 07:42:51.367176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.382809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.382860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:42.377 [2024-11-19 07:42:51.382872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.561 ms 00:27:42.377 [2024-11-19 07:42:51.382880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.396113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.396163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:42.377 [2024-11-19 07:42:51.396176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.172 ms 00:27:42.377 [2024-11-19 07:42:51.396202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.409311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.409499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:42.377 [2024-11-19 07:42:51.409521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.055 ms 00:27:42.377 [2024-11-19 07:42:51.409528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.409973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.409988] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:42.377 [2024-11-19 07:42:51.409998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:27:42.377 [2024-11-19 07:42:51.410005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.477651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.477712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:42.377 [2024-11-19 07:42:51.477727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 67.626 ms 00:27:42.377 [2024-11-19 07:42:51.477735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.489498] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:42.377 [2024-11-19 07:42:51.490567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.490613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:42.377 [2024-11-19 07:42:51.490625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.766 ms 00:27:42.377 [2024-11-19 07:42:51.490640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.490718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.490729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:42.377 [2024-11-19 07:42:51.490738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:42.377 [2024-11-19 07:42:51.490746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.490806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.490817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:42.377 [2024-11-19 07:42:51.490826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:42.377 [2024-11-19 07:42:51.490833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.492288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.492469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:42.377 [2024-11-19 07:42:51.492490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.429 ms 00:27:42.377 [2024-11-19 07:42:51.492497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.492544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.492553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:42.377 [2024-11-19 07:42:51.492562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:42.377 [2024-11-19 07:42:51.492569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.492611] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:42.377 [2024-11-19 07:42:51.492621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.492632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:42.377 [2024-11-19 07:42:51.492640] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:42.377 [2024-11-19 07:42:51.492648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.519146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.519212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:42.377 [2024-11-19 07:42:51.519225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.474 ms 00:27:42.377 [2024-11-19 07:42:51.519233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.519332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.377 [2024-11-19 07:42:51.519363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:42.377 [2024-11-19 07:42:51.519373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:42.377 [2024-11-19 07:42:51.519381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.377 [2024-11-19 07:42:51.520809] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 276.026 ms, result 0 00:27:42.377 [2024-11-19 07:42:51.535565] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.377 [2024-11-19 07:42:51.551578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:42.377 [2024-11-19 07:42:51.559893] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:43.322 07:42:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.322 07:42:52 -- common/autotest_common.sh@862 -- # return 0 00:27:43.322 07:42:52 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:43.322 07:42:52 -- ftl/common.sh@95 -- # return 0 00:27:43.322 07:42:52 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:43.322 [2024-11-19 07:42:52.373479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.322 [2024-11-19 07:42:52.373515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:43.322 [2024-11-19 07:42:52.373526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:43.322 [2024-11-19 07:42:52.373532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.322 [2024-11-19 07:42:52.373550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.322 [2024-11-19 07:42:52.373557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:43.322 [2024-11-19 07:42:52.373563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:43.322 [2024-11-19 07:42:52.373579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.322 [2024-11-19 07:42:52.373594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.322 [2024-11-19 07:42:52.373600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:43.322 [2024-11-19 07:42:52.373606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:43.322 [2024-11-19 07:42:52.373612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.322 [2024-11-19 07:42:52.373657] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process 
finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:27:43.322 true 00:27:43.322 07:42:52 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.322 { 00:27:43.322 "name": "ftl", 00:27:43.322 "properties": [ 00:27:43.322 { 00:27:43.322 "name": "superblock_version", 00:27:43.322 "value": 5, 00:27:43.322 "read-only": true 00:27:43.322 }, 00:27:43.322 { 00:27:43.322 "name": "base_device", 00:27:43.322 "bands": [ 00:27:43.322 { 00:27:43.322 "id": 0, 00:27:43.322 "state": "CLOSED", 00:27:43.322 "validity": 1.0 00:27:43.322 }, 00:27:43.322 { 00:27:43.322 "id": 1, 00:27:43.322 "state": "CLOSED", 00:27:43.322 "validity": 1.0 00:27:43.322 }, 00:27:43.322 { 00:27:43.322 "id": 2, 00:27:43.322 "state": "CLOSED", 00:27:43.322 "validity": 0.007843137254901933 00:27:43.322 }, 00:27:43.322 { 00:27:43.322 "id": 3, 00:27:43.322 "state": "FREE", 00:27:43.322 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 4, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 5, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 6, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 7, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 8, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 9, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 10, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 11, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 12, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 13, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 14, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 15, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 16, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 17, 00:27:43.323 "state": "FREE", 00:27:43.323 "validity": 0.0 00:27:43.323 } 00:27:43.323 ], 00:27:43.323 "read-only": true 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "name": "cache_device", 00:27:43.323 "type": "bdev", 00:27:43.323 "chunks": [ 00:27:43.323 { 00:27:43.323 "id": 0, 00:27:43.323 "state": "OPEN", 00:27:43.323 "utilization": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 1, 00:27:43.323 "state": "OPEN", 00:27:43.323 "utilization": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 2, 00:27:43.323 "state": "FREE", 00:27:43.323 "utilization": 0.0 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "id": 3, 00:27:43.323 "state": "FREE", 00:27:43.323 "utilization": 0.0 00:27:43.323 } 00:27:43.323 ], 00:27:43.323 "read-only": true 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "name": "verbose_mode", 00:27:43.323 "value": true, 00:27:43.323 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:43.323 }, 00:27:43.323 { 00:27:43.323 "name": "prep_upgrade_on_shutdown", 00:27:43.323 "value": false, 00:27:43.323 "desc": "During 
shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:43.323 } 00:27:43.323 ] 00:27:43.323 } 00:27:43.323 07:42:52 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:43.323 07:42:52 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.323 07:42:52 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:43.584 07:42:52 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:43.585 07:42:52 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:43.585 07:42:52 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:43.585 07:42:52 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:43.585 07:42:52 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:43.846 Validate MD5 checksum, iteration 1 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:43.846 07:42:52 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:43.846 07:42:52 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:43.846 07:42:52 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:43.846 07:42:52 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:43.846 07:42:52 -- ftl/common.sh@154 -- # return 0 00:27:43.846 07:42:52 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:43.846 [2024-11-19 07:42:53.013594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
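The two jq filters above act as preconditions on the bdev_ftl_get_properties JSON printed earlier: used counts cache_device chunks with non-zero utilization and opened counts bands reported as OPENED, and the checksum pass only proceeds when both are 0. A condensed sketch of that gate, assuming the rpc.py path shown in the trace (the jq filters are copied from the trace as-is; the variable handling is illustrative):

    # Require a quiesced FTL instance before reading data back: no cache
    # chunk holding data, no band left open.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    (( used == 0 && opened == 0 )) || exit 1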
00:27:43.846 [2024-11-19 07:42:53.014035] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79500 ] 00:27:44.107 [2024-11-19 07:42:53.162614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.107 [2024-11-19 07:42:53.334374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.089  [2024-11-19T07:42:55.600Z] Copying: 574/1024 [MB] (574 MBps) [2024-11-19T07:42:56.985Z] Copying: 1024/1024 [MB] (average 592 MBps) 00:27:47.735 00:27:47.735 07:42:56 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:47.735 07:42:56 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:49.646 07:42:58 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:49.646 07:42:58 -- ftl/upgrade_shutdown.sh@103 -- # sum=b49e89b6ae218d4ab2e61ab168eecdbb 00:27:49.646 07:42:58 -- ftl/upgrade_shutdown.sh@105 -- # [[ b49e89b6ae218d4ab2e61ab168eecdbb != \b\4\9\e\8\9\b\6\a\e\2\1\8\d\4\a\b\2\e\6\1\a\b\1\6\8\e\e\c\d\b\b ]] 00:27:49.646 07:42:58 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:49.646 07:42:58 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:49.646 Validate MD5 checksum, iteration 2 00:27:49.646 07:42:58 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:49.646 07:42:58 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:49.646 07:42:58 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:49.646 07:42:58 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:49.646 07:42:58 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:49.646 07:42:58 -- ftl/common.sh@154 -- # return 0 00:27:49.646 07:42:58 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:49.646 [2024-11-19 07:42:58.616722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
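Iteration 1 above shows the shape of each checksum pass: tcp_dd pulls 1024 blocks of 1 MiB from ftln1 over NVMe/TCP into the scratch file, md5sum piped through cut extracts the digest, and the [[ ... != ... ]] test compares it against the recorded value (the backslash-escaped string is the same digest as rendered by xtrace). The loop reduces to roughly the following, assuming the harness's tcp_dd wrapper and the scratch path from the trace; expected_sums is illustrative bookkeeping holding the digests this run produced for its two iterations:

    # Read the FTL bdev back in 1 GiB windows and verify each window's MD5.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    expected_sums=(b49e89b6ae218d4ab2e61ab168eecdbb d1e05071c43a0237c319c2e942004150)
    skip=0
    for i in "${!expected_sums[@]}"; do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum "$file" | cut -f1 -d ' ')
        [[ $sum == "${expected_sums[$i]}" ]] || exit 1
        skip=$((skip + 1024))   # advance one 1024 MiB window per iteration
    done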
00:27:49.646 [2024-11-19 07:42:58.616960] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79567 ] 00:27:49.646 [2024-11-19 07:42:58.759549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.907 [2024-11-19 07:42:58.901463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.291  [2024-11-19T07:43:01.112Z] Copying: 677/1024 [MB] (677 MBps) [2024-11-19T07:43:04.411Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:27:55.161 00:27:55.161 07:43:04 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:55.161 07:43:04 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:57.077 07:43:06 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:57.077 07:43:06 -- ftl/upgrade_shutdown.sh@103 -- # sum=d1e05071c43a0237c319c2e942004150 00:27:57.077 07:43:06 -- ftl/upgrade_shutdown.sh@105 -- # [[ d1e05071c43a0237c319c2e942004150 != \d\1\e\0\5\0\7\1\c\4\3\a\0\2\3\7\c\3\1\9\c\2\e\9\4\2\0\0\4\1\5\0 ]] 00:27:57.077 07:43:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:57.077 07:43:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:57.077 07:43:06 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:57.077 07:43:06 -- ftl/common.sh@137 -- # [[ -n 79453 ]] 00:27:57.077 07:43:06 -- ftl/common.sh@138 -- # kill -9 79453 00:27:57.077 07:43:06 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:57.077 07:43:06 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:57.077 07:43:06 -- ftl/common.sh@81 -- # local base_bdev= 00:27:57.077 07:43:06 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:57.077 07:43:06 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:57.077 07:43:06 -- ftl/common.sh@89 -- # spdk_tgt_pid=79650 00:27:57.077 07:43:06 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:57.077 07:43:06 -- ftl/common.sh@91 -- # waitforlisten 79650 00:27:57.077 07:43:06 -- common/autotest_common.sh@829 -- # '[' -z 79650 ']' 00:27:57.077 07:43:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.077 07:43:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.077 07:43:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.077 07:43:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.077 07:43:06 -- common/autotest_common.sh@10 -- # set +x 00:27:57.077 07:43:06 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:57.077 [2024-11-19 07:43:06.137316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
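The tcp_target_shutdown_dirty step above is the crux of the test: pid 79453 is killed with SIGKILL, so no "FTL shutdown" management process runs and the device is left dirty, and tcp_target_setup immediately brings up a fresh spdk_tgt (pid 79650) on the same tgt.json. In outline, using the commands and helper names visible in the trace (waitforlisten is the harness's poll-for-RPC-socket helper; the pid capture here is illustrative):

    # Kill the target before FTL can persist shutdown state, then restart it
    # on the same config so the next "FTL startup" must recover a dirty device.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # block until /var/tmp/spdk.sock answers

The second "FTL startup" trace that follows is the recovery this forces.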
00:27:57.077 [2024-11-19 07:43:06.137556] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79650 ] 00:27:57.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79453 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:57.077 [2024-11-19 07:43:06.286502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.338 [2024-11-19 07:43:06.491721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:57.338 [2024-11-19 07:43:06.492116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.281 [2024-11-19 07:43:07.220460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:58.281 [2024-11-19 07:43:07.220546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:58.282 [2024-11-19 07:43:07.367709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.367769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:58.282 [2024-11-19 07:43:07.367785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:58.282 [2024-11-19 07:43:07.367793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.367857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.367871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:58.282 [2024-11-19 07:43:07.367880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:58.282 [2024-11-19 07:43:07.367889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.367912] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:58.282 [2024-11-19 07:43:07.369216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:58.282 [2024-11-19 07:43:07.369407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.369424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:58.282 [2024-11-19 07:43:07.369436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.498 ms 00:27:58.282 [2024-11-19 07:43:07.369444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.369826] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:58.282 [2024-11-19 07:43:07.388809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.389006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:58.282 [2024-11-19 07:43:07.389030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.983 ms 00:27:58.282 [2024-11-19 07:43:07.389038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.398863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.398910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:58.282 [2024-11-19 07:43:07.398923] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:58.282 [2024-11-19 07:43:07.398931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.399322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.399353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:58.282 [2024-11-19 07:43:07.399363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:27:58.282 [2024-11-19 07:43:07.399371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.399407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.399417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:58.282 [2024-11-19 07:43:07.399425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:58.282 [2024-11-19 07:43:07.399435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.399462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.399471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:58.282 [2024-11-19 07:43:07.399479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:58.282 [2024-11-19 07:43:07.399487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.399516] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:58.282 [2024-11-19 07:43:07.403020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.403214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:58.282 [2024-11-19 07:43:07.403233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.516 ms 00:27:58.282 [2024-11-19 07:43:07.403242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.403288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.282 [2024-11-19 07:43:07.403298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:58.282 [2024-11-19 07:43:07.403310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:58.282 [2024-11-19 07:43:07.403318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.282 [2024-11-19 07:43:07.403354] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:58.282 [2024-11-19 07:43:07.403375] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:58.282 [2024-11-19 07:43:07.403409] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:58.282 [2024-11-19 07:43:07.403424] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:58.282 [2024-11-19 07:43:07.403499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:58.282 [2024-11-19 07:43:07.403513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:58.282 [2024-11-19 07:43:07.403527] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes
00:27:58.282 [2024-11-19 07:43:07.403537] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB
00:27:58.282 [2024-11-19 07:43:07.403545] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB
00:27:58.282 [2024-11-19 07:43:07.403553] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873
00:27:58.282 [2024-11-19 07:43:07.403561] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4
00:27:58.282 [2024-11-19 07:43:07.403568] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024
00:27:58.282 [2024-11-19 07:43:07.403575] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4
00:27:58.282 [2024-11-19 07:43:07.403582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.282 [2024-11-19 07:43:07.403590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout
00:27:58.282 [2024-11-19 07:43:07.403598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.231 ms
00:27:58.282 [2024-11-19 07:43:07.403607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.282 [2024-11-19 07:43:07.403669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.282 [2024-11-19 07:43:07.403677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout
00:27:58.282 [2024-11-19 07:43:07.403685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms
00:27:58.282 [2024-11-19 07:43:07.403692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.282 [2024-11-19 07:43:07.403766] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:27:58.282 [2024-11-19 07:43:07.403776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:27:58.282 [2024-11-19 07:43:07.403784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB
00:27:58.282 [2024-11-19 07:43:07.403792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.282 [2024-11-19 07:43:07.403804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:27:58.282 [2024-11-19 07:43:07.403812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB
00:27:58.282 [2024-11-19 07:43:07.403819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB
00:27:58.282 [2024-11-19 07:43:07.403825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:27:58.282 [2024-11-19 07:43:07.403833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB
00:27:58.282 [2024-11-19 07:43:07.403839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.282 [2024-11-19 07:43:07.403846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:27:58.282 [2024-11-19 07:43:07.403853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB
00:27:58.282 [2024-11-19 07:43:07.403863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.282 [2024-11-19 07:43:07.403871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:27:58.282 [2024-11-19 07:43:07.403878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB
00:27:58.282 [2024-11-19 07:43:07.403885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.282 [2024-11-19 07:43:07.403891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:27:58.282 [2024-11-19 07:43:07.403898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB
00:27:58.282 [2024-11-19 07:43:07.403905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.282 [2024-11-19 07:43:07.403912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc
00:27:58.282 [2024-11-19 07:43:07.403919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB
00:27:58.282 [2024-11-19 07:43:07.403925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB
00:27:58.282 [2024-11-19 07:43:07.403932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:27:58.282 [2024-11-19 07:43:07.403939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB
00:27:58.282 [2024-11-19 07:43:07.403946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:27:58.282 [2024-11-19 07:43:07.403952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:27:58.282 [2024-11-19 07:43:07.403959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB
00:27:58.282 [2024-11-19 07:43:07.403966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:27:58.282 [2024-11-19 07:43:07.403972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:27:58.282 [2024-11-19 07:43:07.403979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB
00:27:58.282 [2024-11-19 07:43:07.403985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:27:58.282 [2024-11-19 07:43:07.403992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:27:58.282 [2024-11-19 07:43:07.403998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB
00:27:58.282 [2024-11-19 07:43:07.404004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:27:58.282 [2024-11-19 07:43:07.404011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:27:58.282 [2024-11-19 07:43:07.404018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB
00:27:58.282 [2024-11-19 07:43:07.404025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.282 [2024-11-19 07:43:07.404031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:27:58.282 [2024-11-19 07:43:07.404037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB
00:27:58.283 [2024-11-19 07:43:07.404043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.283 [2024-11-19 07:43:07.404049] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:27:58.283 [2024-11-19 07:43:07.404057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:27:58.283 [2024-11-19 07:43:07.404065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB
00:27:58.283 [2024-11-19 07:43:07.404071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:27:58.283 [2024-11-19 07:43:07.404281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:27:58.283 [2024-11-19 07:43:07.404289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB
00:27:58.283 [2024-11-19 07:43:07.404296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB
00:27:58.283 [2024-11-19 07:43:07.404304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:27:58.283 [2024-11-19 07:43:07.404311] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB
00:27:58.283 [2024-11-19 07:43:07.404318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB
00:27:58.283 [2024-11-19 07:43:07.404326] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:27:58.283 [2024-11-19 07:43:07.404335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404344] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:27:58.283 [2024-11-19 07:43:07.404351] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400
00:27:58.283 [2024-11-19 07:43:07.404381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400
00:27:58.283 [2024-11-19 07:43:07.404388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400
00:27:58.283 [2024-11-19 07:43:07.404396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400
00:27:58.283 [2024-11-19 07:43:07.404403] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404417] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000
00:27:58.283 [2024-11-19 07:43:07.404440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0
00:27:58.283 [2024-11-19 07:43:07.404447] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:27:58.283 [2024-11-19 07:43:07.404455] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404464] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:58.283 [2024-11-19 07:43:07.404472] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:27:58.283 [2024-11-19 07:43:07.404479] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:27:58.283 [2024-11-19 07:43:07.404486] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:27:58.283 [2024-11-19 07:43:07.404494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.404502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade
00:27:58.283 [2024-11-19 07:43:07.404509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.771 ms
00:27:58.283 [2024-11-19 07:43:07.404519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.420980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.421531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:27:58.283 [2024-11-19 07:43:07.421661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.412 ms
00:27:58.283 [2024-11-19 07:43:07.421689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.421758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.421781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses
00:27:58.283 [2024-11-19 07:43:07.421801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms
00:27:58.283 [2024-11-19 07:43:07.421820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.457876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.458054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:27:58.283 [2024-11-19 07:43:07.458139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.979 ms
00:27:58.283 [2024-11-19 07:43:07.458165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.458244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.458318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:27:58.283 [2024-11-19 07:43:07.458339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms
00:27:58.283 [2024-11-19 07:43:07.458398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.458537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.458566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:27:58.283 [2024-11-19 07:43:07.458630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms
00:27:58.283 [2024-11-19 07:43:07.458652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.458714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.458779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:27:58.283 [2024-11-19 07:43:07.458804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms
00:27:58.283 [2024-11-19 07:43:07.458823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.477772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.477941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:27:58.283 [2024-11-19 07:43:07.478024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.912 ms
00:27:58.283 [2024-11-19 07:43:07.478196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.478367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.478441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery
00:27:58.283 [2024-11-19 07:43:07.478489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:58.283 [2024-11-19 07:43:07.478512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.497692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.497859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state
00:27:58.283 [2024-11-19 07:43:07.497925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.147 ms
00:27:58.283 [2024-11-19 07:43:07.497948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.283 [2024-11-19 07:43:07.508040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.283 [2024-11-19 07:43:07.508201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing
00:27:58.283 [2024-11-19 07:43:07.508266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms
00:27:58.283 [2024-11-19 07:43:07.508309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.545 [2024-11-19 07:43:07.575280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.545 [2024-11-19 07:43:07.575517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints
00:27:58.545 [2024-11-19 07:43:07.575591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 66.884 ms
00:27:58.545 [2024-11-19 07:43:07.575776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.545 [2024-11-19 07:43:07.575917] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8
00:27:58.545 [2024-11-19 07:43:07.576029] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9
00:27:58.545 [2024-11-19 07:43:07.576169] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12
00:27:58.545 [2024-11-19 07:43:07.576265] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0
00:27:58.545 [2024-11-19 07:43:07.576472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.545 [2024-11-19 07:43:07.576536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints
00:27:58.545 [2024-11-19 07:43:07.576570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.599 ms
00:27:58.545 [2024-11-19 07:43:07.576594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.545 [2024-11-19 07:43:07.576681] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L
00:27:58.545 [2024-11-19 07:43:07.576764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.545 [2024-11-19 07:43:07.576785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L
00:27:58.545 [2024-11-19 07:43:07.576806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms
00:27:58.545 [2024-11-19 07:43:07.576826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
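The dump_region output above is regular: each region is reported as a name, an offset, and a size, all in MiB. Those lines are easy to tabulate from a saved copy of this console output; a minimal sketch, assuming the console text was saved as console.log (the filename and the awk field positions are taken from the lines shown here, not from the FTL sources):

    grep 'dump_region' console.log |
      awk '/Region /  { name = $NF }
           /offset: / { off = $(NF - 1) }
           /blocks: / { printf "%-16s %10s MiB @ %10s MiB\n", name, $(NF - 1), off }'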
00:27:58.545 [2024-11-19 07:43:07.594047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.545 [2024-11-19 07:43:07.594232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state
00:27:58.545 [2024-11-19 07:43:07.594302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.172 ms
00:27:58.545 [2024-11-19 07:43:07.594325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.545 [2024-11-19 07:43:07.603533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.545 [2024-11-19 07:43:07.603674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID
00:27:58.545 [2024-11-19 07:43:07.603738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms
00:27:58.545 [2024-11-19 07:43:07.603761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.545 [2024-11-19 07:43:07.603971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:58.545 [2024-11-19 07:43:07.604058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map
00:27:58.545 [2024-11-19 07:43:07.604107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:58.545 [2024-11-19 07:43:07.604131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:58.545 [2024-11-19 07:43:07.604433] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14
00:27:59.118 [2024-11-19 07:43:08.232568] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14
00:27:59.118 [2024-11-19 07:43:08.232902] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15
00:27:59.689 [2024-11-19 07:43:08.793151] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15
00:27:59.689 [2024-11-19 07:43:08.793291] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2
00:27:59.689 [2024-11-19 07:43:08.793307] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:27:59.689 [2024-11-19 07:43:08.793320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.793330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L
00:27:59.689 [2024-11-19 07:43:08.793346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1189.081 ms
00:27:59.689 [2024-11-19 07:43:08.793355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.793403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.793413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery
00:27:59.689 [2024-11-19 07:43:08.793422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:59.689 [2024-11-19 07:43:08.793430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.806255] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:27:59.689 [2024-11-19 07:43:08.806385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.806397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P
00:27:59.689 [2024-11-19 07:43:08.806408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.937 ms
00:27:59.689 [2024-11-19 07:43:08.806416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.807126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.807158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM
00:27:59.689 [2024-11-19 07:43:08.807168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.631 ms
00:27:59.689 [2024-11-19 07:43:08.807176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.809457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.809479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters
00:27:59.689 [2024-11-19 07:43:08.809489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.241 ms
00:27:59.689 [2024-11-19 07:43:08.809499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.835697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.835751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction
00:27:59.689 [2024-11-19 07:43:08.835764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.171 ms
00:27:59.689 [2024-11-19 07:43:08.835773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.835892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.835904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization
00:27:59.689 [2024-11-19 07:43:08.835914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms
00:27:59.689 [2024-11-19 07:43:08.835922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.837421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.837465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs
00:27:59.689 [2024-11-19 07:43:08.837475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.478 ms
00:27:59.689 [2024-11-19 07:43:08.837482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.837520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.837528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
00:27:59.689 [2024-11-19 07:43:08.837537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
00:27:59.689 [2024-11-19 07:43:08.837545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.837621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:27:59.689 [2024-11-19 07:43:08.837633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.837641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup
00:27:59.689 [2024-11-19 07:43:08.837652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms
00:27:59.689 [2024-11-19 07:43:08.837660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.837722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:59.689 [2024-11-19 07:43:08.837732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:27:59.689 [2024-11-19 07:43:08.837740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms
00:27:59.689 [2024-11-19 07:43:08.837748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:59.689 [2024-11-19 07:43:08.838823] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1470.632 ms, result 0
00:27:59.689 [2024-11-19 07:43:08.852079] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:59.689 [2024-11-19 07:43:08.868085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0
00:27:59.689 [2024-11-19 07:43:08.876248] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:27:59.950 Validate MD5 checksum, iteration 1
00:27:59.950 07:43:08 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:59.950 07:43:08 -- common/autotest_common.sh@862 -- # return 0
00:27:59.950 07:43:08 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:27:59.950 07:43:08 -- ftl/common.sh@95 -- # return 0
00:27:59.950 07:43:08 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
00:27:59.950 07:43:08 -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:27:59.950 07:43:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:27:59.950 07:43:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:27:59.950 07:43:08 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:27:59.950 07:43:08 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:27:59.950 07:43:08 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:27:59.950 07:43:08 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:27:59.950 07:43:08 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:27:59.950 07:43:08 -- ftl/common.sh@154 -- # return 0
00:27:59.950 07:43:08 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:27:59.950 [2024-11-19 07:43:09.030356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
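The xtrace lines above (upgrade_shutdown.sh@96 through @99) outline the validation pass: read 1024 one-MiB blocks from the ftln1 bdev over NVMe/TCP into a scratch file, then compare its MD5 against the checksum recorded before the shutdown/upgrade. A rough reconstruction of that loop; the iteration count, the $testdir scratch path, and the expected_md5 array are assumptions, since the excerpt only shows the traced commands:

    skip=0
    iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # 1024 blocks of 1 MiB each, queue depth 2, offset by $skip blocks
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d ' ')
        [[ $sum == "${expected_md5[i]}" ]] || return 1
    done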
00:27:59.950 [2024-11-19 07:43:09.030638] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79691 ]
00:28:00.210 [2024-11-19 07:43:09.172636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:01.596 [2024-11-19 07:43:09.310248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:01.596  [2024-11-19T07:43:11.418Z] Copying: 686/1024 [MB] (686 MBps) [2024-11-19T07:43:13.966Z] Copying: 1024/1024 [MB] (average 687 MBps)
00:28:04.716 
00:28:04.716 07:43:13 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:28:04.716 07:43:13 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:28:06.631 07:43:15 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:28:06.631 07:43:15 -- ftl/upgrade_shutdown.sh@103 -- # sum=b49e89b6ae218d4ab2e61ab168eecdbb
00:28:06.631 07:43:15 -- ftl/upgrade_shutdown.sh@105 -- # [[ b49e89b6ae218d4ab2e61ab168eecdbb != \b\4\9\e\8\9\b\6\a\e\2\1\8\d\4\a\b\2\e\6\1\a\b\1\6\8\e\e\c\d\b\b ]]
00:28:06.631 07:43:15 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:28:06.631 07:43:15 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:28:06.631 07:43:15 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:28:06.631 Validate MD5 checksum, iteration 2
00:28:06.631 07:43:15 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:28:06.632 07:43:15 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:28:06.632 07:43:15 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:28:06.632 07:43:15 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:28:06.632 07:43:15 -- ftl/common.sh@154 -- # return 0
00:28:06.632 07:43:15 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:28:06.632 [2024-11-19 07:43:15.546864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:06.632 [2024-11-19 07:43:15.547114] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79763 ]
00:28:06.632 [2024-11-19 07:43:15.695709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:08.016 [2024-11-19 07:43:15.832871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:08.016  [2024-11-19T07:43:17.837Z] Copying: 764/1024 [MB] (764 MBps) [2024-11-19T07:43:18.778Z] Copying: 1024/1024 [MB] (average 744 MBps)
00:28:09.528 
00:28:09.528 07:43:18 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:28:09.528 07:43:18 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@103 -- # sum=d1e05071c43a0237c319c2e942004150
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@105 -- # [[ d1e05071c43a0237c319c2e942004150 != \d\1\e\0\5\0\7\1\c\4\3\a\0\2\3\7\c\3\1\9\c\2\e\9\4\2\0\0\4\1\5\0 ]]
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:28:10.911 07:43:20 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:28:11.172 07:43:20 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:28:11.172 07:43:20 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:28:11.172 07:43:20 -- ftl/common.sh@193 -- # tcp_target_cleanup
00:28:11.172 07:43:20 -- ftl/common.sh@144 -- # tcp_target_shutdown
00:28:11.172 07:43:20 -- ftl/common.sh@130 -- # [[ -n 79650 ]]
00:28:11.172 07:43:20 -- ftl/common.sh@131 -- # killprocess 79650
00:28:11.172 07:43:20 -- common/autotest_common.sh@936 -- # '[' -z 79650 ']'
00:28:11.172 07:43:20 -- common/autotest_common.sh@940 -- # kill -0 79650
00:28:11.172 07:43:20 -- common/autotest_common.sh@941 -- # uname
00:28:11.172 07:43:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:11.172 07:43:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79650
00:28:11.172 killing process with pid 79650
00:28:11.172 07:43:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:11.172 07:43:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:11.172 07:43:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79650'
00:28:11.172 07:43:20 -- common/autotest_common.sh@955 -- # kill 79650
00:28:11.172 07:43:20 -- common/autotest_common.sh@960 -- # wait 79650
00:28:11.744 [2024-11-19 07:43:20.754024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0
00:28:11.744 [2024-11-19 07:43:20.766499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.766533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:28:11.744 [2024-11-19 07:43:20.766543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:28:11.744 [2024-11-19 07:43:20.766549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
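The killprocess trace above (autotest_common.sh@936 through @960) is a defensive kill pattern: validate the pid argument, probe liveness with kill -0, check the process name so a sudo wrapper is never signalled, then kill and wait to reap. A sketch of that flow inferred from the traced lines, not the verbatim helper:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # @936: a pid is required
        kill -0 "$pid" || return 0                           # @940: nothing to do if it is gone
        if [ "$(uname)" = Linux ]; then                      # @941
            process_name=$(ps --no-headers -o comm= "$pid")  # @942
        fi
        [ "$process_name" = sudo ] && return 1               # @946: never signal a sudo wrapper
        echo "killing process with pid $pid"                 # @954
        kill "$pid"                                          # @955
        wait "$pid"                                          # @960: reap and propagate exit status
    }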
00:28:11.744 [2024-11-19 07:43:20.766566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:28:11.744 [2024-11-19 07:43:20.768615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.768644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:28:11.744 [2024-11-19 07:43:20.768652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.038 ms
00:28:11.744 [2024-11-19 07:43:20.768658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.768844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.768855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:28:11.744 [2024-11-19 07:43:20.768861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.169 ms
00:28:11.744 [2024-11-19 07:43:20.768867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.769961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.770061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:28:11.744 [2024-11-19 07:43:20.770073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.081 ms
00:28:11.744 [2024-11-19 07:43:20.770079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.770943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.770957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps
00:28:11.744 [2024-11-19 07:43:20.770967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms
00:28:11.744 [2024-11-19 07:43:20.770973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.778571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.778598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:28:11.744 [2024-11-19 07:43:20.778606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.561 ms
00:28:11.744 [2024-11-19 07:43:20.778611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.782823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.782858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:28:11.744 [2024-11-19 07:43:20.782866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.184 ms
00:28:11.744 [2024-11-19 07:43:20.782872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.782935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.782942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:28:11.744 [2024-11-19 07:43:20.782949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms
00:28:11.744 [2024-11-19 07:43:20.782954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.790389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.744 [2024-11-19 07:43:20.790414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata
00:28:11.744 [2024-11-19 07:43:20.790420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.422 ms
00:28:11.744 [2024-11-19 07:43:20.790426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.744 [2024-11-19 07:43:20.797515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.745 [2024-11-19 07:43:20.797541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata
00:28:11.745 [2024-11-19 07:43:20.797547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.064 ms
00:28:11.745 [2024-11-19 07:43:20.797552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.804580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.745 [2024-11-19 07:43:20.804680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:28:11.745 [2024-11-19 07:43:20.804691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.994 ms
00:28:11.745 [2024-11-19 07:43:20.804697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.811905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.745 [2024-11-19 07:43:20.811997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:28:11.745 [2024-11-19 07:43:20.812007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.164 ms
00:28:11.745 [2024-11-19 07:43:20.812013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.812036] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:28:11.745 [2024-11-19 07:43:20.812047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:11.745 [2024-11-19 07:43:20.812055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:28:11.745 [2024-11-19 07:43:20.812061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:28:11.745 [2024-11-19 07:43:20.812067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:28:11.745 [2024-11-19 07:43:20.812162] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 
00:28:11.745 [2024-11-19 07:43:20.812170] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9b9c7808-8c84-45dc-aad4-d7a509251050
00:28:11.745 [2024-11-19 07:43:20.812176] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:28:11.745 [2024-11-19 07:43:20.812190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:28:11.745 [2024-11-19 07:43:20.812196] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:28:11.745 [2024-11-19 07:43:20.812202] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:28:11.745 [2024-11-19 07:43:20.812207] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:28:11.745 [2024-11-19 07:43:20.812213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:28:11.745 [2024-11-19 07:43:20.812218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:28:11.745 [2024-11-19 07:43:20.812224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:28:11.745 [2024-11-19 07:43:20.812228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:28:11.745 [2024-11-19 07:43:20.812233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.745 [2024-11-19 07:43:20.812239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:28:11.745 [2024-11-19 07:43:20.812246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms
00:28:11.745 [2024-11-19 07:43:20.812254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.821888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.745 [2024-11-19 07:43:20.821913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:28:11.745 [2024-11-19 07:43:20.821921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.609 ms
00:28:11.745 [2024-11-19 07:43:20.821927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.822073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:11.745 [2024-11-19 07:43:20.822079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:28:11.745 [2024-11-19 07:43:20.822089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.130 ms
00:28:11.745 [2024-11-19 07:43:20.822094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
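The WAF: inf entry in the statistics dump above follows from the two counters beside it: write amplification here is total writes divided by user writes, and this run recorded 320 internal writes against 0 user writes. The same arithmetic, with the zero case guarded, for anyone post-processing these dumps:

    awk 'BEGIN { total = 320; user = 0; print (user > 0 ? total / user : "inf") }'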
00:28:11.745 [2024-11-19 07:43:20.856977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.857005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:28:11.745 [2024-11-19 07:43:20.857013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.857020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.857044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.857050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:28:11.745 [2024-11-19 07:43:20.857060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.857065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.857112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.857119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:28:11.745 [2024-11-19 07:43:20.857125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.857131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.857145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.857151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:28:11.745 [2024-11-19 07:43:20.857157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.857165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.916555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.916589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:28:11.745 [2024-11-19 07:43:20.916598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.916604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.939226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:28:11.745 [2024-11-19 07:43:20.939234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.939244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.939294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:28:11.745 [2024-11-19 07:43:20.939300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.939307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.939344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:28:11.745 [2024-11-19 07:43:20.939350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.939356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.939433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:28:11.745 [2024-11-19 07:43:20.939438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.939444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.939477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:28:11.745 [2024-11-19 07:43:20.939483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.939489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.939526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:28:11.745 [2024-11-19 07:43:20.939531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.939537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:11.745 [2024-11-19 07:43:20.939583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:28:11.745 [2024-11-19 07:43:20.939589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:11.745 [2024-11-19 07:43:20.939595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:11.745 [2024-11-19 07:43:20.939689] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 173.169 ms, result 0
00:28:12.737 07:43:21 -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:28:12.737 07:43:21 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:28:12.737 07:43:21 -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:28:12.737 07:43:21 -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:28:12.737 07:43:21 -- ftl/common.sh@181 -- # [[ -n '' ]]
00:28:12.737 07:43:21 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:28:12.737 Remove shared memory files
00:28:12.737 07:43:21 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:28:12.737 07:43:21 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:12.737 07:43:21 -- ftl/common.sh@205 -- # rm -f rm -f
00:28:12.737 07:43:21 -- ftl/common.sh@206 -- # rm -f rm -f
00:28:12.737 07:43:21 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79453
00:28:12.737 07:43:21 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:12.737 07:43:21 -- ftl/common.sh@209 -- # rm -f rm -f
00:28:12.737 ************************************
00:28:12.737 END TEST ftl_upgrade_shutdown
00:28:12.737 ************************************
00:28:12.737 
00:28:12.737 real 1m18.639s
00:28:12.737 user 1m50.247s
00:28:12.737 sys 0m17.676s
00:28:12.737 07:43:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:12.737 07:43:21 -- common/autotest_common.sh@10 -- # set +x
00:28:12.737 07:43:21 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']'
00:28:12.737 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected
00:28:12.737 07:43:21 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']'
00:28:12.737 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected
00:28:12.737 07:43:21 -- ftl/ftl.sh@1 -- # at_ftl_exit
00:28:12.737 07:43:21 -- ftl/ftl.sh@14 -- # killprocess 70657
00:28:12.737 07:43:21 -- common/autotest_common.sh@936 -- # '[' -z 70657 ']'
00:28:12.737 07:43:21 -- common/autotest_common.sh@940 -- # kill -0 70657
00:28:12.737 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70657) - No such process
00:28:12.737 Process with pid 70657 is not found
00:28:12.737 07:43:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70657 is not found'
00:28:12.737 07:43:21 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]]
00:28:12.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:12.737 07:43:21 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79864
00:28:12.737 07:43:21 -- ftl/ftl.sh@20 -- # waitforlisten 79864
00:28:12.737 07:43:21 -- common/autotest_common.sh@829 -- # '[' -z 79864 ']'
00:28:12.737 07:43:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:12.737 07:43:21 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:12.737 07:43:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:12.737 07:43:21 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:12.737 07:43:21 -- common/autotest_common.sh@10 -- # set +x
00:28:12.737 07:43:21 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:28:12.737 [2024-11-19 07:43:21.705212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:12.737 [2024-11-19 07:43:21.705302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79864 ]
00:28:12.737 [2024-11-19 07:43:21.845148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:13.021 [2024-11-19 07:43:21.985110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:13.021 [2024-11-19 07:43:21.985448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:13.282 07:43:22 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:13.282 07:43:22 -- common/autotest_common.sh@862 -- # return 0
00:28:13.282 07:43:22 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:28:13.544 nvme0n1
00:28:13.544 07:43:22 -- ftl/ftl.sh@22 -- # clear_lvols
00:28:13.544 07:43:22 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:28:13.544 07:43:22 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:28:13.804 07:43:22 -- ftl/common.sh@28 -- # stores=9233e873-0710-47e3-9d75-18b80f06815c
00:28:13.804 07:43:22 -- ftl/common.sh@29 -- # for lvs in $stores
00:28:13.804 07:43:22 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9233e873-0710-47e3-9d75-18b80f06815c
00:28:13.804 07:43:23 -- ftl/ftl.sh@23 -- # killprocess 79864
00:28:13.805 07:43:23 -- common/autotest_common.sh@936 -- # '[' -z 79864 ']'
00:28:13.805 07:43:23 -- common/autotest_common.sh@940 -- # kill -0 79864
00:28:13.805 07:43:23 -- common/autotest_common.sh@941 -- # uname
00:28:13.805 07:43:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:13.805 07:43:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79864
00:28:14.066 07:43:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:14.066 07:43:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:14.066 killing process with pid 79864
00:28:14.066 07:43:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79864'
00:28:14.066 07:43:23 -- common/autotest_common.sh@955 -- # kill 79864
00:28:14.066 07:43:23 -- common/autotest_common.sh@960 -- # wait 79864
00:28:15.452 07:43:24 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:15.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:15.452 Waiting for block devices as requested
00:28:15.452 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:28:15.452 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:28:15.714 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:15.714 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:28:21.004 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:28:21.004 07:43:29 -- ftl/ftl.sh@28 -- # remove_shm
00:28:21.004 Remove shared memory files
00:28:21.004 07:43:29 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:21.004 07:43:29 -- ftl/common.sh@205 -- # rm -f rm -f
00:28:21.004 07:43:29 -- ftl/common.sh@206 -- # rm -f rm -f
00:28:21.004 07:43:29 -- ftl/common.sh@207 -- # rm -f rm -f
00:28:21.004 07:43:29 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:21.004 07:43:29 -- ftl/common.sh@209 -- # rm -f rm -f
00:28:21.004 ************************************
00:28:21.004 END TEST ftl
00:28:21.004 ************************************
00:28:21.004 
00:28:21.004 real 13m15.859s
00:28:21.004 user 15m27.145s
00:28:21.004 sys 1m3.751s
00:28:21.004 07:43:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:21.004 07:43:29 -- common/autotest_common.sh@10 -- # set +x
00:28:21.004 07:43:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:28:21.004 07:43:29 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:28:21.004 07:43:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:28:21.004 07:43:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:21.004 07:43:29 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:28:21.004 07:43:29 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:28:21.004 07:43:29 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:28:21.004 07:43:29 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:28:21.004 07:43:29 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:28:21.004 07:43:29 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:28:21.004 07:43:29 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:21.004 07:43:29 -- common/autotest_common.sh@10 -- # set +x
00:28:21.004 07:43:29 -- spdk/autotest.sh@373 -- # autotest_cleanup
00:28:21.004 07:43:29 -- common/autotest_common.sh@1381 -- # local autotest_es=0
00:28:21.004 07:43:29 -- common/autotest_common.sh@1382 -- # xtrace_disable
00:28:21.004 07:43:29 -- common/autotest_common.sh@10 -- # set +x
00:28:22.390 INFO: APP EXITING
00:28:22.390 INFO: killing all VMs
00:28:22.390 INFO: killing vhost app
00:28:22.390 INFO: EXIT DONE
00:28:22.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:22.913 0000:00:09.0 (1b36 0010): Already using the nvme driver
00:28:22.913 0000:00:08.0 (1b36 0010): Already using the nvme driver
00:28:22.913 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:28:22.913 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:28:23.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:23.484 Cleaning
00:28:23.484 Removing: /var/run/dpdk/spdk0/config
00:28:23.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:23.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:23.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:23.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:23.484 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:23.484 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:23.484 Removing: /var/run/dpdk/spdk0
00:28:23.745 Removing: /var/run/dpdk/spdk_pid55986
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56187
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56481
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56587
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56695
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56807
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56905
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56950
00:28:23.745 Removing: /var/run/dpdk/spdk_pid56992
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57056
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57162
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57586
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57645
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57697
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57713
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57811
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57822
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57920
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57944
00:28:23.745 Removing: /var/run/dpdk/spdk_pid57997
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58015
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58068
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58086
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58249
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58291
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58374
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58451
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58482
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58549
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58575
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58616
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58642
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58683
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58709
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58745
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58771
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58812
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58838
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58879
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58901
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58948
00:28:23.745 Removing: /var/run/dpdk/spdk_pid58968
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59009
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59031
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59083
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59113
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59154
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59180
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59221
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59247
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59288
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59314
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59361
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59392
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59433
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59460
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59501
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59527
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59574
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59595
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59635
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59670
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59714
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59743
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59787
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59813
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59854
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59880
00:28:23.745 Removing: /var/run/dpdk/spdk_pid59922
00:28:23.745 Removing: /var/run/dpdk/spdk_pid60000
00:28:23.745 Removing: /var/run/dpdk/spdk_pid60113
00:28:23.745 Removing: /var/run/dpdk/spdk_pid60295
00:28:23.745 Removing: /var/run/dpdk/spdk_pid60381
00:28:23.745 Removing: /var/run/dpdk/spdk_pid60418
00:28:23.745 Removing: /var/run/dpdk/spdk_pid60853
00:28:23.745 Removing: /var/run/dpdk/spdk_pid61211
00:28:23.745 Removing: /var/run/dpdk/spdk_pid61315
00:28:23.745 Removing: /var/run/dpdk/spdk_pid61368
00:28:23.745 Removing: /var/run/dpdk/spdk_pid61394
00:28:23.745 Removing: /var/run/dpdk/spdk_pid61471
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62124
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62161
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62632
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62753
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62868
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62921
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62941
00:28:23.745 Removing: /var/run/dpdk/spdk_pid62972
00:28:23.745 Removing: /var/run/dpdk/spdk_pid64884
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65019
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65033
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65045
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65100
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65104
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65116
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65167
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65171
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65183
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65248
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65252
00:28:23.745 Removing: /var/run/dpdk/spdk_pid65264
00:28:23.745 Removing: /var/run/dpdk/spdk_pid66676
00:28:23.745 Removing: /var/run/dpdk/spdk_pid66787
00:28:23.745 Removing: /var/run/dpdk/spdk_pid66918
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67005
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67087
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67163
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67262
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67331
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67472
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67852
00:28:23.745 Removing: /var/run/dpdk/spdk_pid67889
00:28:24.006 Removing: /var/run/dpdk/spdk_pid68321
00:28:24.006 Removing: /var/run/dpdk/spdk_pid68505
00:28:24.006 Removing: /var/run/dpdk/spdk_pid68605
00:28:24.006 Removing: /var/run/dpdk/spdk_pid68709
00:28:24.006 Removing: /var/run/dpdk/spdk_pid68762
00:28:24.006 Removing: /var/run/dpdk/spdk_pid68786
00:28:24.006 Removing: /var/run/dpdk/spdk_pid69142
00:28:24.006 Removing: /var/run/dpdk/spdk_pid69216
00:28:24.006 Removing: /var/run/dpdk/spdk_pid69295
00:28:24.006 Removing: /var/run/dpdk/spdk_pid69695
00:28:24.006 Removing: /var/run/dpdk/spdk_pid69842
00:28:24.006 Removing: /var/run/dpdk/spdk_pid70657
00:28:24.006 Removing: /var/run/dpdk/spdk_pid70783
00:28:24.006 Removing: /var/run/dpdk/spdk_pid71032
00:28:24.006 Removing: /var/run/dpdk/spdk_pid71124
00:28:24.006 Removing: /var/run/dpdk/spdk_pid71419
00:28:24.006 Removing: /var/run/dpdk/spdk_pid71668
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72051
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72273
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72398
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72459
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72613
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72642
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72698
00:28:24.006 Removing: /var/run/dpdk/spdk_pid72939
00:28:24.006 Removing: /var/run/dpdk/spdk_pid73204
00:28:24.006 Removing: /var/run/dpdk/spdk_pid73810
00:28:24.006 Removing: /var/run/dpdk/spdk_pid74627
00:28:24.006 Removing: /var/run/dpdk/spdk_pid75178
00:28:24.006 Removing: /var/run/dpdk/spdk_pid76011
00:28:24.006 Removing: /var/run/dpdk/spdk_pid76166
00:28:24.006 Removing: /var/run/dpdk/spdk_pid76255
00:28:24.006 Removing: /var/run/dpdk/spdk_pid76686
00:28:24.006 Removing: /var/run/dpdk/spdk_pid76750
00:28:24.006 Removing: /var/run/dpdk/spdk_pid77308
00:28:24.006 Removing: /var/run/dpdk/spdk_pid77939
00:28:24.006 Removing: /var/run/dpdk/spdk_pid78879
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79003
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79051
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79111
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79174
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79228
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79453
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79500
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79567
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79650
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79691
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79763
00:28:24.006 Removing: /var/run/dpdk/spdk_pid79864
00:28:24.006 Clean
00:28:24.006 killing process with pid 48174
00:28:24.006 killing process with pid 48181
00:28:24.268 07:43:33 -- common/autotest_common.sh@1446 -- # return 0
00:28:24.268 07:43:33 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:28:24.268 07:43:33 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:24.268 07:43:33 -- common/autotest_common.sh@10 -- # set +x
00:28:24.268 07:43:33 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:28:24.268 07:43:33 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:24.268 07:43:33 -- common/autotest_common.sh@10 -- # set +x
00:28:24.268 07:43:33 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:24.268 07:43:33 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:24.268 07:43:33 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:24.268 07:43:33 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:28:24.268 07:43:33 -- spdk/autotest.sh@383 -- # hostname
00:28:24.268 07:43:33 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:24.268 geninfo: WARNING: invalid characters removed from testname!
00:28:50.862 07:43:55 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:50.862 07:43:59 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:52.779 07:44:01 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:54.695 07:44:03 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:56.609 07:44:05 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:57.997 07:44:07 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:59.381 07:44:08 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:59.642 07:44:08 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:59.642 07:44:08 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:59.642 07:44:08 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:59.642 07:44:08 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:59.642 07:44:08 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:59.642 07:44:08 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:59.642 07:44:08 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:59.642 07:44:08 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:59.642 07:44:08 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:59.642 07:44:08 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:59.642 07:44:08 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:59.642 07:44:08 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:59.642 07:44:08 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:59.642 07:44:08 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:59.642 07:44:08 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:28:59.642 07:44:08 -- scripts/common.sh@343 -- $ case "$op" in 00:28:59.642 07:44:08 -- scripts/common.sh@344 -- $ : 1 00:28:59.642 07:44:08 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:59.642 07:44:08 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.642 07:44:08 -- scripts/common.sh@364 -- $ decimal 1 00:28:59.642 07:44:08 -- scripts/common.sh@352 -- $ local d=1 00:28:59.642 07:44:08 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:59.642 07:44:08 -- scripts/common.sh@354 -- $ echo 1 00:28:59.642 07:44:08 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:59.642 07:44:08 -- scripts/common.sh@365 -- $ decimal 2 00:28:59.642 07:44:08 -- scripts/common.sh@352 -- $ local d=2 00:28:59.642 07:44:08 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:59.642 07:44:08 -- scripts/common.sh@354 -- $ echo 2 00:28:59.642 07:44:08 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:59.642 07:44:08 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:59.642 07:44:08 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:59.642 07:44:08 -- scripts/common.sh@367 -- $ return 0 00:28:59.642 07:44:08 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.642 07:44:08 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:59.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.642 --rc genhtml_branch_coverage=1 00:28:59.642 --rc genhtml_function_coverage=1 00:28:59.642 --rc genhtml_legend=1 00:28:59.642 --rc geninfo_all_blocks=1 00:28:59.642 --rc geninfo_unexecuted_blocks=1 00:28:59.642 00:28:59.642 ' 00:28:59.642 07:44:08 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:59.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.642 --rc genhtml_branch_coverage=1 00:28:59.642 --rc genhtml_function_coverage=1 00:28:59.642 --rc genhtml_legend=1 00:28:59.642 --rc geninfo_all_blocks=1 00:28:59.642 --rc geninfo_unexecuted_blocks=1 00:28:59.642 00:28:59.642 ' 00:28:59.642 07:44:08 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:59.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.642 --rc genhtml_branch_coverage=1 00:28:59.642 --rc genhtml_function_coverage=1 00:28:59.642 --rc genhtml_legend=1 00:28:59.642 --rc geninfo_all_blocks=1 00:28:59.642 --rc geninfo_unexecuted_blocks=1 00:28:59.642 00:28:59.642 ' 00:28:59.642 07:44:08 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:59.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.642 --rc genhtml_branch_coverage=1 00:28:59.642 --rc genhtml_function_coverage=1 00:28:59.642 --rc genhtml_legend=1 00:28:59.642 --rc geninfo_all_blocks=1 00:28:59.642 --rc geninfo_unexecuted_blocks=1 00:28:59.642 00:28:59.642 ' 00:28:59.642 07:44:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.642 07:44:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:59.642 07:44:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.642 07:44:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.642 07:44:08 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.642 07:44:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.642 07:44:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.642 07:44:08 -- paths/export.sh@5 -- $ export PATH 00:28:59.642 07:44:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.642 07:44:08 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:59.642 07:44:08 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:59.642 07:44:08 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732002248.XXXXXX 00:28:59.642 07:44:08 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732002248.xtUFX6 00:28:59.642 07:44:08 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:59.642 07:44:08 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:28:59.642 07:44:08 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:59.642 07:44:08 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:59.642 07:44:08 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:59.642 07:44:08 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:59.642 07:44:08 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:59.642 07:44:08 -- common/autotest_common.sh@10 -- $ set +x 00:28:59.642 07:44:08 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:59.642 07:44:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:59.642 07:44:08 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:59.642 07:44:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:59.642 07:44:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:59.642 07:44:08 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:59.642 07:44:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:59.642 07:44:08 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:59.643 07:44:08 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:59.643 07:44:08 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:59.643 07:44:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:59.643 + [[ -n 4995 ]] 00:28:59.643 + sudo kill 4995 00:28:59.653 [Pipeline] } 00:28:59.669 [Pipeline] // timeout 00:28:59.675 [Pipeline] } 00:28:59.689 [Pipeline] // stage 00:28:59.695 [Pipeline] } 00:28:59.710 [Pipeline] // catchError 00:28:59.719 [Pipeline] stage 00:28:59.721 [Pipeline] { (Stop VM) 00:28:59.734 [Pipeline] sh 00:29:00.020 + vagrant halt 00:29:02.563 ==> default: Halting domain... 00:29:09.155 [Pipeline] sh 00:29:09.488 + vagrant destroy -f 00:29:12.022 ==> default: Removing domain... 00:29:12.295 [Pipeline] sh 00:29:12.575 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:12.584 [Pipeline] } 00:29:12.600 [Pipeline] // stage 00:29:12.606 [Pipeline] } 00:29:12.620 [Pipeline] // dir 00:29:12.626 [Pipeline] } 00:29:12.642 [Pipeline] // wrap 00:29:12.649 [Pipeline] } 00:29:12.662 [Pipeline] // catchError 00:29:12.672 [Pipeline] stage 00:29:12.674 [Pipeline] { (Epilogue) 00:29:12.688 [Pipeline] sh 00:29:12.968 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:18.248 [Pipeline] catchError 00:29:18.250 [Pipeline] { 00:29:18.263 [Pipeline] sh 00:29:18.544 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:18.544 Artifacts sizes are good 00:29:18.552 [Pipeline] } 00:29:18.566 [Pipeline] // catchError 00:29:18.577 [Pipeline] archiveArtifacts 00:29:18.584 Archiving artifacts 00:29:18.713 [Pipeline] cleanWs 00:29:18.727 [WS-CLEANUP] Deleting project workspace... 00:29:18.727 [WS-CLEANUP] Deferred wipeout is used... 00:29:18.751 [WS-CLEANUP] done 00:29:18.753 [Pipeline] } 00:29:18.768 [Pipeline] // stage 00:29:18.773 [Pipeline] } 00:29:18.786 [Pipeline] // node 00:29:18.792 [Pipeline] End of Pipeline 00:29:18.853 Finished: SUCCESS